AGI and the EMH: markets are not expecting aligned or unaligned AI in the next 30 years
by Trevor Chow, Basil Halperin, and J. Zachary Mazlish
In this post, we point out that short AI timelines would cause real interest rates to be high, and would do so under expectations of either unaligned or aligned AI. However, 30- to 50-year real interest rates are low. We argue that this suggests one of two possibilities:
Long(er) timelines. Financial markets are often highly effective information aggregators (the “efficient market hypothesis”), and therefore real interest rates accurately reflect that transformative AI is unlikely to be developed in the next 30-50 years.
Market inefficiency. Markets are radically underestimating how soon advanced AI technology will be developed, and real interest rates are therefore too low. There is thus an opportunity for philanthropists to borrow while real rates are low to cheaply do good today; and/or an opportunity for anyone to earn excess returns by betting that real rates will rise.
In the rest of this post we flesh out this argument.
Both intuitively and under every mainstream economic model, the “explosive growth” caused by aligned AI would cause high real interest rates.
Both intuitively and under every mainstream economic model, the existential risk caused by unaligned AI would cause high real interest rates.
We show that in the historical data, indeed, real interest rates have been correlated with future growth.
Plugging the Cotra probabilities for AI timelines into the baseline workhorse model of economic growth implies substantially higher real interest rates today.
In particular, we argue that markets are decisively rejecting the shortest possible timelines of 0-10 years.
We argue that the efficient market hypothesis (EMH) is a reasonable prior, and therefore one reasonable interpretation of low real rates is that since markets are simply not forecasting short timelines, neither should we be forecasting short timelines.
Alternatively, if you believe that financial markets are wrong, then you have the opportunity to (1) borrow cheaply today and use that money to e.g. fund AI safety work; and/or (2) earn alpha by betting that real rates will rise.
An order-of-magnitude estimate is that, if markets are getting this wrong, then there is easily $1 trillion lying on the table in the US treasury bond market alone – setting aside the enormous implications for every other asset class.
Interpretation. We view our argument as the best existing outside view evidence on AI timelines – but also as only one model among a mixture of models that you should consider when thinking about AI timelines. The logic here is a simple implication of a few basic concepts in orthodox economic theory and some supporting empirical evidence, which is important because the unprecedented nature of transformative AI makes “reference class”-based outside views difficult to construct. This outside view approach contrasts with, and complements, an inside view approach, which attempts to build a detailed structural model of the world to forecast timelines (e.g. Cotra 2020; see also Nostalgebraist 2022).
Outline. If you want a short version of the argument, sections I and II (700 words) are the heart of the post. Additionally, the section titles are themselves summaries, and we use text formatting to highlight key ideas.
I. Long-term real rates would be high if the market was pricing advanced AI
Real interest rates reflect, among other things:
Time discounting, which includes the probability of death
Expectations of future economic growth
This claim is compactly summarized in the “Ramsey rule” (the only math that we will introduce in this post), a version of the “Euler equation” that in one form or another lies at the heart of every theory and model of dynamic macroeconomics:

r = ρ + σg

where:
r is the real interest rate over a given time horizon
ρ is time discounting over that horizon
σ is a (positive) preference parameter reflecting how much someone cares about smoothing consumption over time
g is the growth rate
(Internalizing the meaning of these Greek letters is wholly not necessary.)
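As a quick numeric sanity check, the Ramsey rule r = ρ + σg can be computed directly, here using the ρ = 0.01, σ = 1 calibration adopted later in the post:

```python
# Ramsey rule: real rate r = rho + sigma * g
def ramsey_rate(rho, sigma, g):
    """Real interest rate implied by time discounting rho,
    consumption-smoothing parameter sigma, and growth rate g."""
    return rho + sigma * g

baseline = ramsey_rate(0.01, 1.0, 0.02)   # 2% growth  -> ~3% real rate
explosive = ramsey_rate(0.01, 1.0, 0.30)  # 30% growth -> ~31% real rate
```
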
While more elaborate macroeconomic theories vary this equation in interesting and important ways, it is common to all of these theories that the real interest rate is higher when either (1) the time discount rate is high or (2) future growth is expected to be high.
We now provide some intuition for these claims.
Time discounting and mortality risk. Time discounting refers to how much people discount the future relative to the present, which captures both (i) intrinsic preference for the present relative to the future and (ii) the probability of death.
The intuition for why the probability of death raises the real rate is the following. Suppose we expect with high probability that humanity will go extinct next year. Then there is no reason to save today: no one will be around to use the savings. This pushes up the real interest rate, since there is less money available for lending.
Economic growth. To understand why higher economic growth raises the real interest rate, the intuition is similar. If we expect to be wildly rich next year, then there is also no reason to save today: we are going to be tremendously rich, so we might as well use our money today while we’re still comparatively poor.
(For the formal math of the Euler equation, Baker, Delong, and Krugman 2005 is a useful reference. The core intuition is that either mortality risk or the prospect of utopian abundance reduces the supply of savings, due to consumption smoothing logic, which pushes up real interest rates.)
Transformative AI and real rates. Transformative AI would either raise the risk of extinction (if unaligned), or raise economic growth rates (if aligned).
Therefore, based on the economic logic above, the prospect of transformative AI – unaligned or aligned – will result in high real interest rates. This is the key claim of this post.
As an example in the aligned case, Davidson (2021) usefully defines AI-induced “explosive growth” as an increase in growth rates to at least 30% annually. Under a baseline calibration where ρ = 0.01 and σ = 1, and importantly assuming growth rates are known with certainty, the Euler equation implies that moving from 2% growth to 30% growth would raise real rates from 3% to 31%!
For comparison, real rates in the data we discuss below have never gone above 5%.
(In using terms like “transformative AI” or “advanced AI”, we refer to the cluster of concepts discussed in Yudkowsky 2008, Bostrom 2014, Cotra 2020, Carlsmith 2021, Davidson 2021, Karnofsky 2022, and related literature: AI technology that precipitates a transition comparable to the agricultural or industrial revolutions.)
II. But: long-term real rates are low
The US 30-year real interest rate ended 2022 at 1.6%. Over the full year it averaged 0.7%, and as recently as March was below zero. Looking at a shorter time horizon, the US 10-year real interest rate is 1.6%, and similarly was below negative one percent as recently as March.
(Data sources used here are explained in section V.)
In autumn 2021, the UK sold a 50-year inflation-linked bond at a real yield of −2.4%. Real rates on analogous bonds in other developed countries have been similarly low or negative in recent years at the longest horizons available. Austria has a 100-year nominal bond – whose yield should be higher than a real bond’s, due to expected inflation – yielding less than 3%.
Thus the conclusion previewed above: financial markets, as evidenced by real interest rates, are not expecting a high probability of either AI-induced growth acceleration or elevated existential risk, on at least a 30-50 year time horizon.
III. Uncertainty, takeoff speeds, inequality, and stocks
In this section we briefly consider some potentially important complications.
Uncertainty. The Euler equation and the intuition described above assumed certainty about AI timelines, but taking uncertainty into account does not change the core logic. With uncertainty about the future economic growth rate, the real interest rate reflects the expected future growth rate, where importantly the expectation is taken under the risk-neutral measure: in brief, the probabilities of different states are reweighted by their marginal utility. We return to this in our quantitative model below.
Takeoff speeds. Nothing in the logic above relating growth to real rates depends on slow vs. fast takeoff speed; the argument can be reread under either assumption and nothing changes. Likewise, when considering the case of aligned AI, rates should be elevated whether economic growth starts to rise more rapidly before advanced AI is developed or only does so afterwards. What matters is that GDP – or really, consumption – ends up high within the time horizon under consideration. As long as future consumption will be high within the time horizon, then there is less motive to save today (“consumption smoothing”), pushing up the real rate.
Inequality. The logic above assumed that the development of transformative AI affects everyone equally. This is a reasonable assumption in the case of unaligned AI, where it is thought that all of humanity will be eliminated. However, when considering aligned AI, it may be thought that only some will benefit, and therefore real interest rates will not move much: if only an elite Silicon Valley minority is expected to have utopian wealth next year, then everyone else may very well still choose to save today.
It is indeed the case that inequality in expected gains from transformative AI would dampen the impact on real rates, but this argument should not be overrated. First, asset prices can be crudely thought of as reflecting a wealth-weighted average across investors. Even if only an elite minority becomes fabulously wealthy, it is their desire for consumption smoothing which will end up dominating the determination of the real rate. Second, truly transformative AI leading to 30%+ economy-wide growth (“Moore’s law for everything”) would not be possible without having economy-wide benefits.
Stocks. One naive objection to the argument here would be the claim that real interest rates sound like an odd, arbitrary asset price to consider; stocks, certainly, are the asset that receives the most media attention.
In appendix 1, we explain that the level of the real interest rate affects every asset price: stocks for instance reflect the present discounted value of future dividends; and real interest rates determine the discount rate used to discount those future dividends. Thus, if real interest rates are ‘wrong’, every asset price is wrong. If real interest rates are wrong, a lot of money is on the table, a point to which we return in section X.
We also argue that stock prices in particular are not a useful indicator of market expectations of AI timelines. Above all, high stock prices of chipmakers or companies like Alphabet (parent of DeepMind) could only reflect expectations for aligned AI and could not be informative of the risk of unaligned AI. Additionally, as we explain further in the appendix, aligned AI could even lower equity prices, by pushing up discount rates.
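To make the discount-rate channel concrete, here is a stylized sketch (not the appendix’s formal argument) using the standard Gordon growth formula, which values a stock as the present value of a dividend stream growing at a constant rate g, discounted at rate r:

```python
def gordon_price(dividend, r, g):
    """Present value of a dividend stream starting at `dividend`,
    growing at rate g forever, discounted at rate r (requires g < r)."""
    assert g < r, "Gordon formula requires the discount rate to exceed growth"
    return dividend / (r - g)

# illustrative numbers only: a 3pp rise in the discount rate
# (0.05 -> 0.08) roughly halves the valuation
price_low_rates = gordon_price(1.0, 0.05, 0.02)   # ~33.3
price_high_rates = gordon_price(1.0, 0.08, 0.02)  # ~16.7
```

This is why higher real rates can depress equity valuations even when expected future dividends rise.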
IV. Historical data on interest rates supports the theory: preliminaries
In section I, we gave theoretical intuition for why higher expected growth or higher existential risk would result in higher interest rates: expectations for such high growth or mortality risk would lead people to want to save less and borrow more today. In this section and the next two, we showcase some simple empirical evidence that the predicted relationships hold in the available data.
Measuring real rates. To compare historical real interest rates to historical growth, we need to measure real interest rates.
Most bonds historically have been nominal, where the yield is not adjusted for changes in inflation. Therefore, the vast majority of research studying real interest rates starts with nominal interest rates, attempts to construct an estimate of expected inflation using some statistical model, and then subtracts this estimate of expected inflation from the nominal rate to get an estimated real interest rate. However, constructing measures of inflation expectations is extremely difficult, and as a result most papers in this literature are not very informative.
Additionally, most bonds historically have had some risk of default. Adjusting for this default premium is also extremely difficult, which in particular complicates analysis of long-run interest rate trends.
The difficulty in measuring real rates is one of the main causes, in our view, of Tyler Cowen’s Third Law: “all propositions about real interest rates are wrong”. Throughout this piece, we are badly violating this (Gödelian) Third Law. In appendix 2, we expand on our argument that the source of Tyler’s Third Law is measurement issues in the extant literature, together with some separate, frequent conceptual errors.
Our approach. We take a more direct approach.
Real rates. For our primary analysis, we instead use market real interest rates from inflation-linked bonds. Because we use interest rates directly from inflation-linked bonds – instead of constructing shoddy estimates of inflation expectations to use with nominal interest rates – this approach avoids the measurement issue just discussed (and, we argue, allows us to escape Cowen’s Third Law).
To our knowledge, prior literature has not used real rates from inflation-linked bonds only because these bonds are comparatively new. Using inflation-linked bonds confines our sample to the last ~20 years in the US, the last ~30 in the UK/Australia/Canada. Before that, inflation-linked bonds didn’t exist. Other countries have data for even fewer years and less liquid bond markets.
(The yields on inflation-linked bonds are not perfect measures of real rates, because of risk premia, liquidity issues, and some subtle issues with the way these securities are structured. You can build a model and attempt to strip out these issues; here, we will just use the raw rates. If you prefer to think of these empirics as “are inflation-linked bond yields predictive of future real growth” rather than “are real rates predictive of future real growth”, that interpretation is still sufficient for the logic of this post.)
Nominal rates. Because there are only 20 or 30 years of data on real interest rates from inflation-linked bonds, we supplement our data by also considering unadjusted nominal interest rates. Nominal interest rates reflect real interest rates plus inflation expectations, so it is not appropriate to compare nominal interest rates to real GDP growth.
Instead, analogously to comparing real interest rates to real GDP growth, we compare nominal interest rates to nominal GDP growth. The latter is not an ideal comparison under economic theory – and inflation variability could swamp real growth variability – but we argue that this approach is simple and transparent.
Looking at nominal rates allows us to have a very large sample of countries for many decades: we use OECD data on nominal rates available for up to 70 years across 39 countries.
V. Historical data on interest rates supports the theory: graphs
The goal of this section is to show that real interest rates have correlated with future real economic growth, and secondarily, that nominal interest rates have correlated with future nominal economic growth. We also briefly discuss the state of empirical evidence on the correlation between real rates and existential risk.
Real rates vs. real growth. A first cut at the data suggests that, indeed, higher real rates today predict higher real growth in the future:
To see how to read these graphs, take the left-most graph (“10-year horizon”) for example. The x-axis shows the level of the real interest rate, as reflected on 10-year inflation linked bonds. The y-axis shows average real GDP growth over the following 10 years.
The middle and right hand graphs show the same, at the 15-year and 20-year horizons. The scatter plot shows all available data for the US (since 1999), the UK (since 1985), Australia (since 1995), and Canada (since 1991). (Data for Australia and Canada is only available at the 10-year horizon, and comes from Augur Labs.)
Eyeballing the figure, there appears to be a strong relationship between real interest rates today and future economic growth over the next 10-20 years.
To our knowledge, this simple stylized fact is novel.
Caveats. “Eyeballing it” is not a formal econometric method; but, this is a blog post not a journal article (TIABPNAJA). We do not perform any formal statistical tests here, but we do want to acknowledge some important statistical points and other caveats.
First, the data points in the scatter plot are not statistically independent: real rates and growth are both persistent variables; the data points contain overlapping periods; and growth rates in these four countries are correlated. These issues are evident even from eyeballing the time series. Second, of course this relationship is not causally identified: we do not have exogenous variation in real growth rates. (If you have ideas for identifying the causal effect of higher real growth expectations on real rates, we would love to discuss with you.)
Relatedly, many other things are changing in the world which are likely to affect real rates. Population growth is slowing, retirement is lengthening, the population is aging. But under AI-driven “explosive” growth – again say 30%+ annual growth, following the excellent analysis of Davidson (2021) – then, we might reasonably expect that this massive of an increase in the growth rate would drown out the impact of any other factors.
Nominal rates vs. nominal growth. Turning now to evidence from nominal interest rates, recall that the usefulness of this exercise is that while there exist only 20 or 30 years of data on real interest rates, for only a handful of countries, there is much more data on nominal interest rates.
We simply take all available data on 10-year nominal rates from the set of 39 OECD countries since 1954. The following scatterplot compares the 10-year nominal interest rate versus nominal GDP growth over the succeeding ten years by country:
Again, there is a strong positive – if certainly not perfect – relationship. (For example, the outlier brown dots at the bottom of the graph are Greece, whose high interest rates despite negative NGDP growth reflect high default risk during an economic depression.)
The same set of nontrivial caveats apply to this analysis as above.
We consider this data from nominal rates to be significantly weaker evidence than the evidence from real rates, but corroboration nonetheless.
Backing out market-implied timelines. Taking the univariate pooled OLS results from the real rate data far too seriously, the fact that the 10-year real rate in the US ended 2022 at 1.6% would predict average annual real GDP growth of 2.6% over the next 10 years in the US; the analogous interest rate of −0.2% in the UK would predict 0.7% annual growth over the next 10 years in the UK. Such growth rates, clearly, are not compatible with the arrival of transformative aligned AI within this horizon.
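The regression coefficients are not reported here, but if the two quoted predictions are assumed to lie on a single pooled OLS line (a hypothetical reconstruction, not the authors’ actual estimates), the implied line can be backed out from the two (rate, growth) pairs:

```python
# (10-year real rate, predicted 10-year avg growth), in percent, from the text
us = (1.6, 2.6)   # US:  1.6% real rate -> 2.6% predicted annual growth
uk = (-0.2, 0.7)  # UK: -0.2% real rate -> 0.7% predicted annual growth

# two points determine the implied univariate line
slope = (us[1] - uk[1]) / (us[0] - uk[0])  # ~1.06
intercept = us[1] - slope * us[0]          # ~0.91

def implied_growth(real_rate):
    """Market-implied average annual real growth, given today's real rate."""
    return intercept + slope * real_rate
```
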
VI. Empirical evidence on real rates and mortality risk
We have argued that in the theory, real rates should be higher in the face of high economic growth or high mortality risk; empirically, so far, we have only shown a relationship between real rates and growth, but not between real rates and mortality.
Showing that real rates accurately reflect changes in existential risk is very difficult, because there is no word-of-god measurement of how existential risk has evolved over time.
We would be very interested in pursuing new empirical research examining “asset pricing under existential risk”. In appendix 3, we perform a scorched-earth literature review and find essentially zero existing empirical evidence on real rates and existential risks.
Disaster risk. In particular, the extant literature does not study existential risks but instead “merely” disaster risks, under which real assets are devastated but humanity is not exterminated. Disaster risks do not necessarily raise real rates – indeed, such risks are thought to lower real rates due to precautionary savings. That notwithstanding, some highlights of the appendix review include a small set of papers finding that individuals with a higher perceived risk of nuclear conflict during the Cold War saved less, as well as a paper noting that equities which were headquartered in cities more likely to be targeted by Soviet missiles did worse during the Cuban missile crisis (see also). Our assessment is that these and the other available papers on disaster risks discussed in the appendix have severe limitations for the purposes here.
Individual mortality risk. We judge that the best evidence on this topic comes instead from examining the relationship between individual mortality risk and savings/investment behavior. The logic we provided was that if humanity will be extinct next year, then there is no reason to save, pushing up the real rate. Similar logic says that at the individual level, a higher risk of death for any reason should lead to lower savings and less investment in human capital. Examples of lower savings at the individual level need not raise interest rates at the economy-wide level, but do provide evidence for the mechanism whereby extinction risk should lead to lower saving and thus higher interest rates.
One example comes from Malawi, where the provision of a new AIDS therapy caused a significant increase in life expectancy. Using spatial and temporal variation in where and when these therapeutics were rolled out, it was found that increased life expectancy results in more savings and more human capital investment in the form of education spending. Another experiment in Malawi provided information to correct pessimistic priors about life expectancy, and found that higher life expectancy directly caused more investment in agriculture and livestock.
A third example comes from testing for Huntington’s disease, a disease which causes a meaningful drop in life expectancy to around 60 years. Using variation in when people are diagnosed with Huntington’s, it has been found that those who learn they carry the gene for Huntington’s earlier are 30 percentage points less likely to finish college, which is a significant fall in their human capital investment.
Studying the effect on savings and real rates from increased life expectancy at the population level is potentially intractable, but would be interesting to consider further. Again, in our assessment, the best empirical evidence available right now comes from the research on individual “existential” risks and suggests that real rates should increase with existential risk.
VII. Plugging the Cotra probabilities into a simple quantitative model of real interest rates predicts very high rates
Section V used historical data to go from the current real rate to a very crude market-implied forecast of growth rates; in this section, we instead use a model to go from existing forecasts of AI timelines to timeline-implied real rates. We aim to show that under short AI timelines, real interest rates would be unrealistically elevated.
This is a useful exercise for three reasons. First, the historical data is only able to speak to growth forecasts, and therefore only able to provide a forecast under the possibly incorrect assumption of aligned AI. Second, the empirical forecast assumes a linear relationship between the real rate and growth, which may not be reasonable for a massive change caused by transformative AI. Third and quite important, the historical data cannot transparently tell us anything about uncertainty and the market’s beliefs about the full probability distribution of AI timelines.
We use the canonical (and nonlinear) version of the Euler equation – the model discussed in section I – but now allow for uncertainty on both how soon transformative AI will be developed and whether or not it will be aligned. The model takes as its key inputs (1) a probability of transformative AI each year, and (2) a probability that such technology is aligned.
The model is a simple application of the stochastic Euler equation under an isoelastic utility function. We use the following as a baseline, before considering alternative probabilities:
We use smoothed Cotra (2022) probabilities for transformative AI over the next 30 years: a 2% yearly chance until 2030, a 3% yearly chance through 2036, and a 4% yearly chance through 2052.
We use the FTX Future Fund’s median estimate of 15% for the probability that AI is unaligned conditional on the development of transformative AI.
With the arrival of aligned AI, we use the Davidson (2021) assumption of 30% annual economic growth; with the arrival of unaligned AI, we assume human extinction. In the absence of the development of transformative AI, we assume a steady 1.8% growth rate.
We calibrate the pure rate of subjective time preference to 0.01 and the consumption smoothing parameter (i.e. inverse of the elasticity of intertemporal substitution) as 1, following the economic literature.
Thus, to summarize: by default, GDP grows at 1.8% per year. Every year, there is some probability (based on Cotra) that transformative AI is developed. If it is developed, there is a 15% probability the world ends, and an 85% chance GDP growth jumps to 30% per year.
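The calculation just summarized can be sketched in a few lines. This is a minimal reconstruction, not the authors’ spreadsheet: the exact year-splits of the smoothed Cotra schedule and the arrival-timing convention below are our assumptions, so it lands near, though not exactly on, the baseline figures reported below. Extinction states contribute zero to the expectation, since no one is left to collect the bond’s payoff:

```python
import math

def real_rate(horizon=30, rho=0.01, sigma=1.0, g_base=0.018,
              g_agi=0.30, p_misaligned=0.15, p_tai=None):
    """Long-horizon real rate from the stochastic Euler equation
    exp(-r*T) = E[ 1{alive} * exp(-rho*T) * (C_T/C_0)**(-sigma) ],
    enumerating the year transformative AI arrives (if ever)."""
    if p_tai is None:
        # assumed smoothing of the Cotra schedule: 2%/yr through 2030,
        # 3%/yr through 2036, 4%/yr through 2052 (2022 start)
        p_tai = [0.02] * 8 + [0.03] * 7 + [0.04] * 15
    survive = 1.0  # probability TAI has not yet arrived
    emu = 0.0      # expected (alive-weighted) marginal-utility ratio
    for k in range(1, horizon + 1):
        q_k = survive * p_tai[k - 1]      # P(TAI arrives in year k)
        survive *= 1.0 - p_tai[k - 1]
        # aligned arrival in year k: growth jumps to g_agi thereafter;
        # unaligned arrival contributes zero (extinction)
        c_T = (1 + g_base) ** k * (1 + g_agi) ** (horizon - k)
        emu += q_k * (1 - p_misaligned) * c_T ** (-sigma)
    emu += survive * ((1 + g_base) ** horizon) ** (-sigma)  # no TAI
    return rho - math.log(emu) / horizon
```

Setting the arrival probabilities to zero recovers the no-TAI rate of roughly 2.8% discussed below, and the Cotra-style schedule pushes the 30-year rate up by around three percentage points.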
We have built a spreadsheet here that allows you to tinker with the numbers yourself, such as adjusting the growth rate under aligned AI, to see what your timelines and probability of alignment would imply for the real interest rate. (It also contains the full Euler equation formula generating the results, for those who want the mathematical details.) We first estimate real rates under the baseline calibration above, before considering variations in the critical inputs.
Baseline results. The model predicts that under zero probability of transformative AI, the real rate at any horizon would be 2.8%. In comparison, under the baseline calibration just described based on Cotra timelines, the real rate at a 30-year horizon would be pushed up to 5.9% – roughly three percentage points higher.
For comparison, the 30-year real rate in the US is currently 1.6%.
While the simple Euler equation somewhat overpredicts the level of the real interest rate even under zero probability of transformative AI – 2.8% in the model versus 1.6% in the data – this overprediction is explained by the radical simplicity of the model we use and is a known issue in the literature. Adding other factors (e.g. precautionary savings) to the model would lower the level, but would not change its directional predictions, which help quantitatively explain the fall in real rates over the past ~30 years.
Therefore, what is most informative is the three percentage point difference between the real rate under Cotra timelines (5.9%) versus under no prospect of transformative AI (2.8%): Cotra timelines imply real interest rates substantially higher than their current levels.
Now, from this baseline estimate, we can also consider varying the key inputs.
Varying assumptions on P(misaligned|AGI). First consider changing the assumption that advanced AI is 15% likely to be unaligned (conditional on the development of AGI). Varying this parameter does not have a large impact: moving from 0% to 100% probability of misalignment raises the model’s predicted real rate only from 5.8% to 6.3%.
Varying assumptions on timelines. Second, consider making timelines shorter or longer. In particular, consider varying the probability of development by 2043, which we use as a benchmark per the FTX Future Fund.
We scale the Cotra timelines up and down to vary the probability of development by 2043. (Specifically: we target a specific cumulative probability of development by 2043; and, following Cotra, if the annual probability up until 2030 is p, then it is 1.5p in the subsequent seven years up through 2036, and it is 2p in the remaining years of the 30-year window.)
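The scaling just described can be sketched as a small root-finder. The 8/7/6 split of years through 2043 is our reading of the date ranges (2022 start), so treat it as an assumption:

```python
def cum_prob_by_2043(p):
    """Cumulative probability of TAI by 2043 with base annual probability p
    through 2030, 1.5p through 2036, and 2p thereafter (2022 start assumed)."""
    q = 1.0  # probability TAI has not arrived
    for years, prob in [(8, p), (7, 1.5 * p), (6, 2 * p)]:
        q *= (1 - prob) ** years
    return 1 - q

def solve_base_prob(target, lo=0.0, hi=0.25, tol=1e-10):
    """Bisect for the base annual probability p that hits a target
    cumulative probability of development by 2043."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if cum_prob_by_2043(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

Under these assumptions the baseline p = 2% lands close to the ~45%-by-2043 cumulative probability mentioned above.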
As the next figure shows and as one might expect, shorter AI timelines have a very large impact on the model’s estimate for the real rate.
The original baseline parameterization from Cotra corresponds to the FTX Future Fund “upper threshold” of a 45% chance of development by 2043, which generated the 3 percentage point increase in the 30-year real rate discussed above.
The Future Fund’s median of a 20% probability by 2043 generates a 1.1 percentage point increase in the 30-year real rate.
The Future Fund’s “lower threshold” of a 10% probability by 2043 generates a 0.5 percentage point increase in the real rate.
These results strongly suggest that any timeline shorter than or equal to the Cotra timeline is not being expected by financial markets.
VIII. Markets are decisively rejecting the shortest possible timelines
While it is not possible to back out exact numbers for the market’s implicit forecast for AI timelines, it is reasonable to say that the market is decisively rejecting – i.e., putting very low probability on – the development of transformative AI in the very near term, say within the next ten years.
Consider the following examples of extremely short timelines:
Five year timelines: With a 50% probability of transformative AI by 2027, and the same yearly probability thereafter, the model predicts 13.0pp higher 30-year real rates today!
Ten year timelines: With a 50% probability of transformative AI by 2032, and the same yearly probability thereafter, the model predicts 6.5pp higher 30-year real rates today.
Real rate movements of these magnitudes are wildly counterfactual. As previously noted, real rates in the data used above have never gone above even 5%.
Stagnation. As a robustness check, the configurable spreadsheet allows you to place some yearly probability on the economy stagnating and growing at 0% per year from then on. Even with a 20% chance of stagnation by 2053 (higher than is realistic), under Cotra timelines the model still generates a 2.1 percentage point increase in 30-year rates.
Recent market movements. Real rates have increased around two percentage points since the start of 2022, with the 30-year real rate moving from −0.4% to 1.6%, approximately the pre-covid level. This is a large enough move to merit discussion. While this rise in long-term real rates could reflect changing market expectations for timelines, it seems much more plausible that high inflation, the Russia-Ukraine war, and monetary policy tightening have together worked to drive up short-term real rates and the risk premium on long-term real rates.
IX. Financial markets are the most powerful information aggregators produced by the universe (so far)
Should we update on the fact that markets are not expecting very short timelines?
As a prior, we think that market efficiency is reasonable. We do not try to provide a full defense of the efficient markets hypothesis (EMH) in this piece given that it has been debated ad nauseum elsewhere, but here is a scaffolding of what such an argument would look like.
Loosely, the EMH says that the current price of any security incorporates all public information about it, and as such, you should not expect to systematically make money by trading securities.
This is simply a no-arbitrage condition, and certainly no more radical than supply and demand: if something is over- or under-priced, you’ll take action based on that belief until you no longer believe it. In other words, you’ll buy and sell it until you think the price is right. Otherwise, there would be an unexploited opportunity for profit that was being left on the table, and there are no free lunches when the market is in equilibrium.
As a corollary, the current price of a security should be the best available risk-adjusted predictor of its future price. Notice we didn’t say that the price is equal to the “correct” fundamental value. In fact, the current price is almost certainly wrong. What we did say is that it is the best guess, i.e. no one knows if it should be higher or lower.
Testing this hypothesis is difficult, in the same way that testing any equilibrium condition is difficult. Not only is the equilibrium always changing, but there is also the joint hypothesis problem that Fama (1970) outlined: comparing actual asset prices to “correct” theoretical asset prices means you are simultaneously testing whatever asset pricing model you choose, alongside the EMH.
In this sense, it makes no sense to talk about “testing” the EMH. Rather, the question is how quickly prices converge to the limit of market efficiency. In other words, how fast is information diffusion? Our position is that for most things, this is pretty fast!
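As a toy sketch of the no-free-lunch claim (not evidence about real markets), consider what happens when a simple trading rule is run against simulated prices that already incorporate all information, i.e. a martingale: its expected profit is zero.

```python
# Toy illustration: against a martingale price process (today's price is the
# best forecast of tomorrow's), a naive momentum rule earns ~zero on average.
import random

random.seed(0)

def simulate_price(n_steps=250):
    """One martingale price path: i.i.d. zero-mean shocks."""
    price = 100.0
    path = [price]
    for _ in range(n_steps):
        price += random.gauss(0, 1)
        path.append(price)
    return path

def momentum_profit(path):
    """Go long after an up day, short after a down day; collect the next move."""
    profit = 0.0
    for t in range(1, len(path) - 1):
        position = 1 if path[t] > path[t - 1] else -1
        profit += position * (path[t + 1] - path[t])
    return profit

# Averaged over many paths, the rule's profit is statistically indistinguishable
# from zero: no free lunch when prices already embed the available information.
avg = sum(momentum_profit(simulate_price()) for _ in range(2000)) / 2000
print(f"average momentum profit: {avg:.3f}")
```

The interesting empirical question, as the text says, is not this idealized limit but how quickly real prices converge to it.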
Here are a few heuristics that support our position:
For our purposes, the earlier evidence on the link between real rates and growth is a highly relevant example of market efficiency.
There are notable examples of markets seeming to be eerily good at forecasting hard-to-anticipate events:
In the wake of the Challenger explosion, despite no definitive public information being released, the market seems to have identified which firm was responsible.
Economist Armen Alchian observed that the stock price of lithium producers spiked 461% following the public announcement of the first hydrogen bomb tests in 1954, while the prices of producers of other radioactive metals were flat. He circulated a paper within RAND, where he was working, identifying lithium as the material used in the tests, before the paper was suppressed by leadership who were apparently aware that indeed lithium was used. The market was prescient even though zero public information was released about lithium’s usage.
Remember: if real interest rates are wrong, all financial assets are mispriced. If real interest rates “should” rise three percentage points or more, that is easily hundreds of billions of dollars worth of revaluations. It is unlikely that sharp market participants are leaving billions of dollars on the table.
X. If markets are not efficient, you could be earning alpha and philanthropists could be borrowing
While our prior in favor of efficiency is fairly strong, the market could be currently failing to anticipate transformative AI, due to various limits to arbitrage.
However, if you do believe the market is currently wrong about the probability of short timelines, then we now argue there are two courses of action you should consider taking:
Bet on real rates rising (“get rich or die trying”)
Borrow today, including in order to fund philanthropy (“impatient philanthropy”)
1. Bet on real rates rising (“get rich or die trying”)
Under the logic argued above, if you genuinely believe that AI timelines are short, then you should consider putting your money where your mouth is: bet that real rates will rise when the market updates, and potentially earn a lot of money if markets correct. Shorting (or going underweight) government debt is the simplest way of expressing this view.
Indeed, AI safety researcher Paul Christiano has written publicly that he is (or was) short 30-year government bonds.
If short timelines are your true belief in your heart of hearts, and not merely a belief in a belief, then you should seriously consider how much money you could earn here and what you could do with those resources.
Implementing the trade. For retail investors, betting against treasuries via ETFs is perhaps simplest. Such trades can be done easily with retail brokers, like Schwab.
(i) For example, one could simply short the LTPZ ETF, which holds long-term real US government debt (effective duration: 20 years).
(ii) Alternatively, if you would prefer to avoid engaging in shorting yourself, there are ETFs which will do the shorting for you, with nominal bonds: TBF is an ETF which is short 20+ year treasuries (duration: 18 years); TBT is the same, but levered 2x; and TTT is the same, but levered 3x. There are a number of other similar options. Because these ETFs do the shorting for you, all you need to do is purchase shares of the ETFs.
Back of the envelope estimate. A rough estimate of how much money is on the table, just from shorting the US treasury bond market alone, suggests there is easily $1 trillion in value at stake from betting that rates will rise.
In response to a 1 percentage point rise in interest rates, the price of a bond falls in percentage terms by its “duration”, to a first-order approximation.
The average value-weighted duration of (privately-held) US treasuries is approximately 4 years.
So, to a first-order approximation, if rates rise by 3 percentage points, then the value of treasuries will fall by 12% (that is, 3*4).
The market cap of (privately-held) treasuries is approximately $17 trillion.
Thus, if rates rise by 3 percentage points, then the total value of treasuries can be expected to fall by $2.04 trillion (that is, 12%*17 trillion).
Slightly more than half (55%) of the interest rate sensitivity of the treasury market comes from bonds with maturity beyond 10 years. Assuming that the 3 percentage point rise occurs only at this horizon, and rounding down, we arrive at the $1 trillion estimate.
Alternatively, returning to the LTPZ ETF with its duration of 20 years, a 3 percentage point rise in rates would cause its value to fall by 60%. Using the 3x levered TTT with duration of 18 years, a 3 percentage point rise in rates would imply a mouth-watering cumulative return of 162%.
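The back-of-the-envelope arithmetic above can be replicated in a few lines, using the first-order duration approximation (percentage price change ≈ −duration × rate change); all figures are the ones quoted in the text.

```python
# Reproducing the back-of-the-envelope numbers from the text, using the
# first-order duration approximation: dP/P ≈ -duration * d(rate).

def price_change(duration, rate_rise):
    """First-order approximation to a bond's fractional price change."""
    return -duration * rate_rise

rate_rise = 0.03          # 3 percentage point rise in real rates
avg_duration = 4          # value-weighted duration of privately-held treasuries
market_cap = 17e12        # ~$17 trillion of privately-held treasuries
long_end_share = 0.55     # share of rate sensitivity beyond 10-year maturity

drop = -price_change(avg_duration, rate_rise)   # 12% fall
dollar_loss = drop * market_cap                 # ~$2.04 trillion
print(f"treasury market fall: {drop:.0%} (~${dollar_loss / 1e12:.2f} trillion)")
print(f"long-end share: ~${long_end_share * dollar_loss / 1e12:.1f} trillion")

# The ETF illustrations: LTPZ (duration 20) falls 60%; TTT (3x levered,
# duration 18) gains roughly 3 * 18 * 0.03 = 162% cumulatively.
ltpz_fall = -price_change(20, rate_rise)        # 0.60
ttt_gain = 3 * 18 * rate_rise                   # 1.62
```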
While fully fleshing out the trade analysis is beyond the scope of this post, this illustration gives an idea of how large the possibilities are.
The alternative to this order-of-magnitude estimate would be to build a complete bond pricing model to estimate more precisely the expected returns of shorting treasuries. This would need to take into account e.g. the convexity of price changes with interest rate movements, the varied maturities of outstanding bonds, and the different varieties of instruments issued by the Treasury. Further refinements would include trading derivatives (e.g. interest rate futures) instead of shorting bonds directly, for capital efficiency, and using leverage to increase expected returns.
Additionally, the analysis could be extended beyond the US government debt market, again since changes to real interest rates would plausibly impact the price of every asset: stocks, commodities, real estate, everything.
(If you would be interested in fully scoping out possible trades, we would be interested in talking.)
Trade risk and foom risk. We want to be clear that – unless you are risk neutral, or can borrow without penalty at the risk-free rate, or believe in short timelines with 100% probability – such a bet would not be a free lunch: this is not an “arbitrage” in the technical sense of a risk-free profit. One risk is that the market moves in the other direction in the short term, before correcting, and that you are unable to roll over your position for liquidity reasons.
The other risk that could motivate not making this bet is the risk that the market – for some unspecified reason – never has a chance to correct, because (1) transformative AI ends up unaligned and (2) humanity’s conversion into paperclips occurs overnight. This would prevent the market from ever “waking up”.
However, to be clear, expecting this specific scenario requires both:
Buying into specific stories about how takeoff will occur: specifically, Yudkowskian foom-type scenarios with fast takeoff.
Having a lot of skepticism about the optimization forces pushing financial markets towards informational efficiency.
You should be sure that your beliefs are actually congruent with these requirements, if you want to refuse to bet that real rates will rise. Additionally, we will see that the second suggestion in this section (“impatient philanthropy”) is not affected by the possibility of foom scenarios.
2. Borrow today, including in order to fund philanthropy (“impatient philanthropy”)
If prevailing interest rates are lower than your subjective discount rate – which is the case if you think markets are underestimating prospects for transformative AI – then simple cost-benefit analysis says you should save less or even borrow today.
An illustrative example. As an extreme example to illustrate this argument, imagine that you think that there is a 50% chance that humanity will be extinct next year, and otherwise with certainty you will have the same income next year as you do this year. Suppose the market real interest rate is 0%. That means that if you borrow $10 today, then in expectation you only need to pay $5 off, since 50% of the time you expect to be dead.
It is only if the market real rate is 100% – so that your $10 loan requires paying back $20 next year, or exactly $10 in expectation – that you are indifferent about borrowing. If the market real rate is less than 100%, then you want to borrow. If interest rates are “too low” from your perspective, then on the margin this should encourage you to borrow, or at least save less.
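The example's arithmetic in a few lines: with probability p you are extinct and repay nothing; otherwise you repay the principal plus interest. You are indifferent when expected repayment equals the principal.

```python
# The extinction-borrowing example from the text: expected repayment on a loan
# when you assign probability p_extinct to being dead before it comes due.

def expected_repayment(principal, market_rate, p_extinct):
    """Repay nothing if extinct; otherwise principal * (1 + rate)."""
    return (1 - p_extinct) * principal * (1 + market_rate)

principal, p = 10.0, 0.5

print(expected_repayment(principal, 0.0, p))  # at a 0% rate: $5 in expectation
print(expected_repayment(principal, 1.0, p))  # at 100%: $10, the indifference point
```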
Note that this logic is not affected by whether or not the market will “correct” and real rates will rise before everyone dies, unlike the logic above for trading.
Borrowing to fund philanthropy today. While you may want to borrow today simply to fund wild parties, a natural alternative is: borrow today, locking in “too low” interest rates, in order to fund philanthropy today. For example: to fund AI safety work.
We can call this strategy “impatient philanthropy”, in analogy to the concept of “patient philanthropy”.
This is not a call for philanthropists to radically rethink their cost-benefit analyses. Instead, we merely point out: ensure that your financial planning properly accounts for any difference between your discount rate and the market real rate at which you can borrow. You should not be using the market real rate to do your financial planning. If you have a higher effective discount rate due to your AI timelines, that could imply that you should be borrowing today to fund philanthropic work.
Relationship to patient philanthropy. The logic here has a similar flavor to Phil Trammell’s “patient philanthropy” argument (Trammell 2021) – but with a sign flipped. Longtermist philanthropists with a zero discount rate, who live in a world with a positive real interest rate, should be willing to save all of their resources for a long time to earn that interest, rather than spending those resources today on philanthropic projects. Short-timeliners have a higher discount rate than the market, and therefore should be impatient philanthropists.
(The point here is not an exact analog to Trammell 2021, because the paper there considers strategic game theoretic considerations and also takes the real rate as exogenous; here, the considerations are not strategic and the endogeneity of the real rate is the critical point.)
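A minimal sketch of the sign flip, assuming continuous compounding for simplicity: the relative value of deferring a dollar of philanthropy by t years scales as e^((r−δ)t), where r is the market real rate earned by waiting and δ is your subjective discount rate. Waiting pays if and only if r > δ.

```python
# Patient vs. impatient philanthropy: value of deferring spending by t years,
# relative to spending today, under continuous compounding (an assumption made
# here for simplicity; the rates below are illustrative).
import math

def relative_value_of_waiting(r, delta, years):
    """e^((r - delta) * t): >1 means waiting pays; <1 means spend now."""
    return math.exp((r - delta) * years)

# Patient case: zero discount rate, 2% market rate; waiting 50 years pays ~2.7x.
print(relative_value_of_waiting(r=0.02, delta=0.00, years=50))

# Impatient case: a 10% effective discount rate from short timelines makes
# deferred spending worth a tiny fraction of spending today.
print(relative_value_of_waiting(r=0.02, delta=0.10, years=50))
```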
XI. Conclusion: outside views vs. inside views & future work
We do not claim to have special technical insight into forecasting the likely timeline for the development of transformative artificial intelligence: we do not present an inside view on AI timelines.
However, we do think that market efficiency provides a powerful outside view for forecasting AI timelines and for making financial decisions. Based on prevailing real interest rates, the market seems to be strongly rejecting timelines of less than ten years, and does not seem to be placing particularly high odds on the development of transformative AI even 30-50 years from now.
We argue that market efficiency is a reasonable benchmark, and consequently, this forecast serves as a useful prior for AI timelines. If markets are wrong, on the other hand, then there is an enormous amount of money on the table from betting that real interest rates will rise. In either case, this market-based approach offers a useful framework: either for forecasting timelines, or for asset allocation.
Opportunities for future work. We could have put 1000 more hours into the empirical side or the model, but this is a blog post, not a journal article. Future work we would be interested in collaborating on or seeing includes:
More careful empirical analyses of the relationship between real rates and growth. In particular, (1) analysis of data samples with larger variation in growth rates (e.g. with the Industrial Revolution, China or the East Asian Tigers), where a credible measure of real interest rates can be used; and (2) causally identified estimates of the relationship between real rates and growth, rather than correlations. Measuring historical real rates is the key challenge, and the main reason why we have not tried to address these here.
Any empirical analysis of how real rates vary with changing existential risk. Measuring changes in existential risk is the key challenge.
Alternative quantitative models on the relationship between real interest rates and growth/x-risk with alternative preference specifications, incomplete markets, or disaster risk.
Tests of market forecasting ability at longer time horizons for any outcome of significance; and comparisons of market efficiency at shorter versus longer time horizons.
Creation of sufficiently-liquid genuine market instruments for directly measuring outcomes we care about like long-horizon GDP growth: e.g. GDP swaps, GDP-linked bonds, or binary GDP prediction markets. (We emphasize market instruments to distinguish from forecasting platforms like Metaculus or play-money sites like Manifold Markets where the forceful logic of financial market efficiency simply does not hold.)
An analysis of the most capital-efficient way to bet on short AI timelines and the possible expected returns (“the greatest trade of all time”).
Analysis of the informational content of infinitely-lived assets: e.g. the discount rates embedded in land prices and rental contracts. There is an existing literature related to this topic.
This literature estimates risky, nominal discount rates embedded in rental contracts out as far as 1000 years, and finds surprisingly low estimates – certainly less than 10%. This is potentially extremely useful information, though this literature is not without caveats. Among many other things, we cannot have the presumption of informational efficiency in land/rental markets, unlike financial markets, due to severe frictions in these markets (e.g. inability to short sell).
Thanks especially to Leopold Aschenbrenner, Nathan Barnard, Jackson Barkstrom, Joel Becker, Daniele Caratelli, James Chartouni, Tamay Besiroglu, Joel Flynn, James Howe, Chris Hyland, Stephen Malina, Peter McLaughlin, Jackson Mejia, Laura Nicolae, Sam Lazarus, Elliot Lehrer, Jett Pettus, Pradyumna Prasad, Tejas Subramaniam, Karthik Tadepalli, Phil Trammell, and participants at ETGP 2022 for very useful conversations on this topic and/or feedback on drafts.
Update 1: we have now posted a comment summarising our responses to the feedback we have received so far.
[Screenshot: OpenAI’s ChatGPT model on what will happen to real rates if transformative AI is developed.]
Some framings you can use to interpret this post:
“This blog post takes Fama seriously” [a la Mankiw-Romer-Weil]
“The market-clearing price does not hate you nor does it love you” [a la Yudkowsky]
“Existential risk and asset pricing” [a la Aschenbrenner 2020, Trammell 2021]
“Get rich or hopefully don’t die trying” [a la 50 Cent]
“You can short the apocalypse.” [contra Peter Thiel, cf Alex Tabarrok]
“Tired: market monetarism. Inspired: market longtermism.” [a la Scott Sumner]
“This is not not financial advice.” [a la the standard disclaimer]
Appendix 1. Against using stock prices to forecast AI timelines
Link to separate EA Forum post
Appendix 2. Explaining Tyler Cowen’s Third Law
Appendix 3. Asset pricing under existential risk: a literature review
Appendix 4. Supplementary Figures
Lots of the comments here are pointing at details of the markets and whether it’s possible to profit off of knowing that transformative AI is coming. Which is all fine and good, but I think there’s a simple way to look at it that’s very illuminating.
The stock market is good at predicting company success because there are a lot of people trading in it who think hard about which companies will succeed, doing things like writing documents about those companies’ target markets, products, and leadership. Traders who do a good job at this sort of analysis get more funds to trade with, which makes their trading activity have a larger impact on the prices.
Now, when you say that:
I think what you’re claiming is that market prices are substantially controlled by traders who have a probability like that in their heads. Or traders who are following an algorithm which had a probability like that in the spreadsheet. Or something like that. Some sort of serious cognition, serious in the way that traders treat company revenue forecasts.
And I think that this is false. I think their heads don’t contain any probability for transformative AI at all. I think that if you could peer into the internal communications of trading firms, and you went looking for their thoughts about AI timelines affecting interest rates, you wouldn’t find thoughts like that. And if you did find an occasional trader who had such thoughts, and quantified how much impact they would have on the prices if they went all-in on trading based on that theory, you would find their impact was infinitesimal.
Market prices aren’t mystical, they’re aggregations of traders’ cognition. If the cognition isn’t there, then the market price can’t tell you anything. If the cognition is there but it doesn’t control enough of the capital to move the price, then the price can’t tell you anything.
I think this post is a trap for people who think of market prices as a slightly mystical source of information, who don’t have much of a model of what cognition is behind those prices.
I find it hard to believe that the number of traders who have considered crazy future AI scenarios is negligible. New AI models, semiconductor supply chains, etc. have gotten lots of media and intellectual attention recently. Arguments about transformative AGI are public. Many people have incentives to look into them and think about their implications.
I don’t think this post is decisive evidence against short timelines. But neither do I think it’s a “trap” that relies on fully swallowing EMH. I think there’re deeper issues to unpack here about why much of the world doesn’t seem to put much weight on AGI coming any time soon.
Plenty of people at Jane Street read LessWrong.
Just a note on Jane Street in particular—nobody at Jane Street is making a potentially multi year bet on interest rates with Jane Street money. That’s simply not in the category of things that Jane Street trades. If someone at Jane Street wanted to make betting on this a significant part of what they do, they’d have to leave and go elsewhere and find someone to give them at least hundreds of millions of dollars to make the bet.
Jane Street even hosted a foom debate between Hanson and Yudkowsky, IIRC.
(I don’t think this is substantial evidence on the validity of original post)
Yeah, I’m also similarly sceptical that a highly publicised/discussed portion of one of the most hyped industries — one that borders on a buzzword at times — has not captured the attention or consideration of the market. Seems hard to imagine given the remarkably salient progress we’ve seen in 2022.
Thanks for this—I think you put really nicely the interpretation that we also are pushing for.
It’s unclear to me that just because the number/liquidity of traders “in the know” is not very small (e.g., it is more than 0.1% of capital), this leads to the market correcting itself. At least, I have some reservations about what I interpret to be the causal process. Suppose that some set of early investors correctly think that ~3% of investors will adopt their own reasoning and engage in similar actions (e.g., “shorting” the long-term bond market) about 6 years before AGI.
But despite all of their reasoning, a very large portion of capital-weighted investors still don’t believe (A) the whole AGI argument, or (B) that there’s much worth doing once they do believe the whole AGI argument (e.g., “well, I guess I should just try not to die before AGI and enjoy my last normal years with my family/friends”).
I see a few potential problems, but am not sure about enough details to know whether the market would suffer from these problems:
It seems plausible that large institutional investors will just balance against any large uptick early on, preventing investors from getting much of any profit in the first 10 or so years, leaving only 5-ish years for profits to start accumulating (without even considering discount rates);
Even once the potential for profit opens up or even if the previous point doesn’t apply very strongly, some investors might eventually think they’ll be left “holding the bag” if they ever run into a multi-year plateau in beliefs/capital movement. This could be a scenario where most of the “AGI-open-minded” investors have been tapped, but most other people in society are still skeptical (I.e., it isn’t a smooth distribution of open-mindedness). Short-term profit relies on the rates increasing after you go short, but if you don’t expect the rates to increase then you won’t enter the market and adjust the prices. But the expectation that the person after you might also have this reasoning in its recursive form disincentivizes you from entering, creating a cascading effect.
“Well, I’ll profit eventually, even if it takes 10 years of waiting”—not necessarily, or at that point you may not really enjoy the profits, as it may be “I have 8-figure assets but 3 years left of (normal) life.” I’m not confident that this is a sufficiently appealing offer to the people who could take you up on it and move the market.
Definitely agree with this. Consider for instance how markets seemed to have reacted strangely / too slowly to the emergence of the Covid-19 pandemic, and then consider how much more familiar and predictable is the idea of a viral pandemic compared to the idea of unaligned AI:
Peter Thiel (in his “Optimistic Thought Experiment” essay about investing under anthropic shadow, which I analyzed in a Forum post) also thinks that there is a “failure of imagination” going on here, similar to what Gwern describes:
The markets reacted appropriately to covid. Match the Dow to forecasters’ and EAF’s prognostications and you’ll find that the markets moved in tandem with rational expectations.
Not only have I never heard this before, I was there and remember watching this not happen. Source?
The Dow plateaued in early January and crashed starting Feb 20th, tracking rational expectations and three weeks ahead of media/mass awareness, which only caught up around March 12th
Almost everyone I knew was concerned with the pandemic going global and dramatically disrupting our lives much sooner than Feb 20th. On January 26th, a post on the EA Forum, “Concerning the Recent 2019-Novel Coronavirus Outbreak”, made the case we should be worried. Within a few weeks of that, everyone I knew was already bracing for covid to hit the US. Looking back at my house Discord server, we had the “if we have to go weeks without leaving the house, is there anything we’d run out of? Let’s buy it now” conversation on February 6th (which is also when my Vox article about Covid was published, in which I quote a source saying, “Instead of deriding people’s fears about the Wuhan coronavirus, I would advise officials and reporters to focus more on the high likelihood that things will get worse and the not-so-small possibility that they will get much worse.”)
The late January SlateStarCodex open threads also typically contained 10-20 comments discussing the virus, linking prediction markets, and debating the odds of more than 500k deaths and how people in various places should expect disruptions to their daily life. (“If everyone involved massively bungles absolutely everything, this would be pretty-bad-but-not-apocalyptic,” one commenter argued on January 29th.)
In late January/early February, I think attitudes were that the virus was a big deal but still more likely than not to be successfully contained, though people should prepare just in case. I think people with our knowledge state wouldn’t’ve bet confidently on a failure of containment on January 30th (some did, but it wasn’t the median community stance), but the markets would have started moving in that direction steadily from very early in February.
I think financial markets not responding until Feb 20th was a clear case of markets doing substantially worse than the people around me.
The plateau beginning early January could be read as an initial reaction to covid.
I wouldn’t expect the markets to react in tandem with the most alarmist rationalists. I participated in a rationalist prediction tournament in mid-January 2020 where only one participant gave COVID >50% odds of killing 10000 people. The EAF post you linked was an unusual view at the time, as were Travis W Fisher’s comments at Metaculus. I grant that the rationalist consensus preceded the market’s reaction, but only by days.
I agree with most of this comment, but
As someone that knows nothing about finance, I don’t understand this point.
If you had bought S&P500 on Feb 20th 2020 you would be up 20% today, so the market not reacting does not seem that irrational in hindsight? Also, US GDP didn’t seem to change that much in 2020 and 2021?
I guess VIX options might have been underpriced, but I think you would need to time them pretty precisely around march?
I know some people in the community made a bunch of money, but in periods of high volatility I expect many people to make some money and many people to lose some money (for example when the market immediately recovered while still in the middle of a pandemic).
I’m not totally sure what I think the correct market behavior based on knowable information was, but it seems very hard to make the case that a large crash on Feb 20th is evidence of the markets moving “in tandem with rational expectations”.
Here’s what I wrote in April 2020 on that topic:
“A couple weeks ago, I started investigating the response, here and in the stock market, to COVID-19. I found that LessWrong’s conversation took off about a week after the stock market started to crash. Given what we knew about COVID-19 prior to Feb. 20th, when the market first started to decline, I felt that the stock market’s reaction was delayed. And of course, there’d been plenty of criticism of the response of experts and governments. But I was playing catch-up. I certainly was not screaming about COVID-19 until well after that time.
Today, I found the most detailed timeline I’ve seen of confirmed cases around the world. It goes day by day and country by country, from Jan. 13th to the end of March.
That timeline shows that Feb. 21st was the first date when at least 3 countries besides China had 10+ new confirmed cases in a single day (Japan, South Korea, Italy, and Iran).
That changes my interpretation of the stock market crash dramatically. Investors weren’t failing to synthesize the early information or waiting for someone to yell “fire!” They were waiting to see confirmed international community spread, rather than just a few cases popping up here and there. Once they saw that early evidence, the sell-off began, and it continued in tandem, day by day, with the evidence of community spread in new countries and the exponential growth of COVID-19 cases in countries where it was already established.”
Numerous people on rationality Twitter called it way before Feb 20th, and some of those bought put options and made big profits. This must be some interesting new take on “rational expectations”. https://twitter.com/ESYudkowsky/status/1229529150098046976?s=20&t=IGOl9Mzj1FYtcPYd1F52AQ
Yet the tweets you linked were from 2/16 and 2/17.
Rational expectations doesn’t mean “the alarmists are always right,” and EMH doesn’t imply that no one can profit helping correct the market.
The tweets you linked demonstrate the confusion at the time. Robin thought that China would be overwhelmed with COVID in a few months, while the rest of the world would be closing contact. In fact the rest of the world got overwhelmed with COVID and crashed their economies in just one month, while China contained it and kept its economy rolling for another two years. Rational expectations would’ve incorporated views like Robin’s, but not parroted them. A plateau from early January and crash on 2/20 isn’t inconsistent with that.
It doesn’t seem all that relevant to me whether traders have a probability like that in their heads. Whether they have a low probability or are not thinking about it, they’re approximately leaving money on the table in a short-timelines world, which should be surprising. People have a large incentive to hunt for important probabilities they’re ignoring.
Of course, there are examples (cf. behavioral economics) of systemic biases in markets. But even within behavioral economics, it’s fairly commonly known that it’s hard to find ongoing, large-scale biases in financial markets.
The claim in the post (which I think is very good) is that we should have a pretty strong prior against anything which requires positing massive market inefficiency on any randomly selected proposition where there is lots of money on the table. This suggests that you should update away from very short timelines. There’s no assumption that markets are a “mystical source of information”, just that if you bet against them you almost always lose.
There’s also a nice “put your money where your mouth is” takeaway from the post, which AFAIK few short-timelines people are acting on.
I think a fair number of market participants may have something like a probability estimate for transformative AI within five years and maybe even ten. (For example back when SoftBank was throwing money at everything that looked like a tech company, they justified it with a thesis something like “transformative AI is coming soon”, and this would drive some other market participants to think about the truth of that thesis and its implications even if they wouldn’t otherwise.) But I think you are right that basically no market participants have a probability estimate for transformative AI (or almost anything else) 30 years out; they aren’t trying to make predictions that far out and don’t expect to do significantly better than noise if they did try.
(Even if for some reason you’re wrong for the case of transformative AI specifically, your comment still made me smarter, so thanks! :) )
While this is a very valuable post, I don’t think the core argument quite holds, for the following reasons:
Markets work well as information aggregation algorithms when it is possible to profit a lot from being the first to realize something (e.g., as portrayed in “The Big Short” about the Financial Crisis).
In this case, there is no way for the first movers to profit big. Sure, you can take your capital out of the market and spend it before the world ends (or everyone becomes super-rich post-singularity), but that’s not the same as making a billion bucks.
You can argue that one could take a short position on interest rates (e.g., in the form of a loan) if you believe that they will rise at some point, but that is a different bet from short timelines—what you’re betting on, then, is when the world will realize that timelines are short, since that’s what it will take before many people choose to pull out of the market and thus drive interest rates up. It is entirely possible to believe both that timelines are short and that the world won’t realize AI is near for a while yet, in which case you wouldn’t make this bet. Furthermore, counterparty risks tend to get in the way of taking out very big loans, and so they would dominate your cost of capital.
All that said, it is possible that the strategy of “people with a high x-risk estimate should use long-term loans to fund their work” is indeed a feasible funding mechanism for such work, since this would not be a bet intending to make the borrower rich—it would just be a bet to survive, although you could get poor in the process.
This reasoning sounds pretty tortured to me.
First, should you really believe that the relatively small number of traders needed to move markets won’t come to think AI is a really big deal, given that you think AI is a really big deal?
Second, if “the world won’t realize AI is near for a while,” you can still make money by following analogous strategies to those described in the post. You don’t need the world to realize tomorrow.
I see that I wasn’t being super clear above. Others in the comments have pointed to what I was trying to say here:
- The window between when “enough” traders realize that AI is near and when it arrives may be very short, meaning that even in the best case you’ll only increase your wealth for a very short time by making this bet
- It is not clear how markets would respond if most traders started thinking that AI was near. They may focus on other opportunities they believe are stronger than shorting interest rates (e.g., they may decide to invest in tech companies), or they may decide to take some vacation
- In order to get the benefits of the best case above, you need to take on massive interest rate risk, so the downside is potentially much larger than the upside (plus, in the downside case, you’re poor for a much longer time)
Therefore, traders may choose not to short interest rates, even if they believe AI is imminent
I don’t think that you were being unclear above. The underlying reasoning still feels a little tortured to me.
I mean, sure, it could be, but wouldn’t it be weird to believe this confidently? The artists are storming parliament, the accountants are on the dole, foom just around the corner—but a small number of traders have not yet clocked that an important change is coming?
Traders are not dumb. At least, the small number of traders necessary to move the market are not dumb. They will understand the logic of this post. A mass ignoring of interest rates in favor of tech equity investing is not a stable equilibrium.
In order to get the benefits of the best case of anything, you need to take on risk. You could make the same directional bet with less risk. If you weaken this statement to “exposure to a good chunk of the benefits of the implications of their beliefs, by taking on reasonable risk” then the interest rate conclusion still goes through.
Could you try to give an estimate as to how much money would be necessary to move the markets? I’m not particularly familiar with the Treasuries market, but I’m not convinced that a small number of traders or even a few billion dollars per year in “smart money” could significantly change it—at least not enough to send a signal about these views that is separable from the surrounding noise.
I think I’ll try and type up my objections in a post rather than a comment—it seems to me that this post is so close to being right that it takes effort to pinpoint the exact place where I disagree, and so I want to take the time to formalize it a bit more.
But in short, I think it’s possible to have 1) rational traders, 2) markets that largely function well, and 3) still no 5+ year advance signal of AGI in the markets, without making very weird assumptions. (note: I choose the 5+ year timeline because I think once you get really close to AGI, say, less than 1 year and lots of weird stuff going on, then you’d at least see some turbulence in the markets as folks are getting confused about how to trade in this very strange situation, so I do think the markets are providing some evidence against extremely short timelines)
(a short additional note here: yes, some of this is addressed at more length in the post, e.g., in section X re my point 3, but IMO the authors state their case somewhat too strongly in those sections. You do not need a Yudkowskian “foom” scenario to happen overnight for the following point to be plausible: “timelines may be short-ish, say ~10 years, but the world will not realize until quite soon before, say 1-3 years, and in the meantime it won’t make sense to bet on interest rate movements for most people”)
To try to group/summarize the discussion in the comments and offer some replies:
1. ‘Traders are not thinking about AGI, the inferential distance is too large’; or ‘a short can only profit if other people take the short position too’
(a) Anyone who thinks they have an edge in markets thinks they’ve noticed something which requires such a large inferential distance that no one else has seen it.
Any trade requires that the market price eventually converges to the ‘correct’ price
⇒ This argument proves too much – it’s a general argument against ever betting that the market will correct an incorrect price!
Those who are arguing against need to make a clearer argument about why this situation is fundamentally different from any other
Sovereign bond markets are easily some of the most liquid and well-functioning markets ever to exist
(b) Many financial market participants ARE thinking about these issues.
Asset manager Cathie Wood has AGI timelines of 6-12 years and is betting the house on that (“AGI could accelerate growth in GDP to 30-50% per year”)
Masayoshi Son raised $100 billion for Softbank’s Vision Fund on the basis that superintelligence will arrive by 2047
The prospect of AGI is not a Thielian secret.
(c) Do make sure to read section X on “Trade risk and foom risk”, where we acknowledge that if you are both (i) extremely skeptical of market efficiency, and (ii) think foom is the likely takeoff scenario, then trading seems less like a good idea.
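For intuition on the growth-to-rates channel invoked above (e.g., the “GDP growth of 30-50% per year” scenario), the post’s workhorse-model logic can be sketched with the Ramsey rule, r = ρ + σg: the real rate equals pure time preference plus the inverse intertemporal elasticity of substitution times expected consumption growth. The parameter values below are illustrative assumptions, not numbers from the post:

```python
# Ramsey rule for the long-run real interest rate: r = rho + sigma * g.
# rho = pure time preference, sigma = inverse IES, g = expected growth.

def ramsey_rate(rho: float, sigma: float, g: float) -> float:
    """Real interest rate implied by the Ramsey rule."""
    return rho + sigma * g

baseline = ramsey_rate(rho=0.01, sigma=1.0, g=0.02)   # ~2% trend growth
agi_boom = ramsey_rate(rho=0.01, sigma=1.0, g=0.30)   # 30%/yr growth claim

print(f"baseline real rate: {baseline:.1%}")  # 3.0%
print(f"AGI-boom real rate: {agi_boom:.1%}")  # 31.0%
```

The point of the sketch: under any remotely standard parameters, expected explosive growth pushes the implied real rate far above observed 30-year real yields.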
2. Stocks versus bonds
Again we refer to detailed discussion in this companion post (appendix 1) on stocks:
(1) Stocks cannot capture the risk of unaligned AI
(2) Developers of TAI might not actually profit much
(3) The developers of TAI might not be publicly traded or even exist yet
(4) The development of TAI could even lower stock prices!
To be clear, though: the economic logic suggests stocks are bad for forecasting timelines (due to the four reasons mentioned in that post)
BUT stocks still could be good ways to earn money betting on short timelines (if the four sources of noise mentioned in that post don’t turn out to hold)
3. Other empirical evidence on real rates
Again we refer to detailed discussion in this companion post (appendix 2) and the important discussion of econometric caveats in section V (“Caveats”)
We emphasize that we would love to have more/better empirical evidence with respect to asset pricing under existential risk (appendix 3)
The challenge with using the historical data (e.g. the Bank of England brought up in the comments) is – as discussed in section IV and in the appendix – that these data are infected with (1) poor estimates of expected inflation and (2) poor estimates of credit risk
For example, in the Schmelzing paper that has been cited: certain claimed spikes in the risk-free rate, e.g. during the Napoleonic war, look far more like a spike in default risk
Or the ex ante real interest rate in World War II in his data is negative, which seems unlikely
Suppose you are one of the 0.1% of macro bonds traders familiar with Yudkowskian foom. You reason as follows: “Suppose that in the next 2 years, we get even more alarming news out of GPT-4 and successors. Suppose it’s so incredibly alarming that 10% of macro traders notice, and then 10% of those hear about Yudkowskian foom scenarios. Putting myself into the shoes of one of those normie macro traders, I think I reason… that most actual normal people won’t change their saving behavior any time soon, even if theoretically they should decrease their saving, and that’s not likely to have macro effects. Still as a normie trader who’s heard about Yudkowsky foomdoom, I think I reason that if Yudkowsky’s right, we’re all dead, and if Yudkowsky’s not right, I’ll get embarrassed about a wrong trade and fired. So this normie trader won’t trade on Yudkowsky foomdoom premises. Therefore I don’t think I can profit over the next two years by shorting a TIPS fund… even leaving aside concerning feelings about whether going hugely short LTPZ would have model risk about LTPZ’s actual relation to real interest rates in these scenarios, or whether other traders would expect big AI impacts to hit measured inflation because of AI-driven lower prices or AI-driven unemployment.”
And then one week before the end of the world, the 1% of most clueful macro bonds traders… will take vacation days early, and draw down their rainy day funds to spend time with their family. They still won’t make macro trades about that, because the payoff matrix looks like “If you’re right, you’re dead and not paid, and if you’re wrong, you’re embarrassed and get fired.” Then haha whoops it turns out that the world didn’t end in a week after all, and people go back to work with a nervous laugh and a sick feeling in their stomachs, and everybody actually falls over dead three weeks later.
If Omega tells you today that everyone will be dead in 2030 with probability 1, there’s no direct way to make a market profit on that private information over the next 2 years, except insofar as foresightful traders today expect less foresightful future traders to hear about AI and read an economics textbook and decide that interest rates theoretically ought to rise and go short TIPS index funds. Foresightful traders today don’t expect this.
To put it another way: Yes, savvy market traders don’t believe that in 2025 everybody will realize that the world is ending. The savvy market traders are correct! Even at the point where the world is ending, everybody will not believe this, and so at no point will the savvy trader have made a profit! The death of all of humanity induces a market anomaly wherein savvy traders don’t expect to be able to profit from everyone else’s error because no event occurs where the real thing actually happens and everybody says “Oops” and the savvy trader gets paid off.
There just isn’t any mystery here. You can’t make a short-term profit off correcting these market prices even if Omega whispers the truth in your ear with certainty. That’s it, that’s the mystery explained, you’re done.
OP seems to equivocate between two ideas: one true idea, and one false idea.
The true idea is that if Omega tells you personally that the world will end in 2030 with probability 1, you personally should not bother saving for retirement. Call this the Personal Idea.
The false idea is that if you believe in foomdoom, you should go long real interest rates and expect a market profit. Call this the Market Idea.
Intuitively, at least if you’re swayed by this essay, the idea in Market probably seems pretty close to the idea in Personal. If everybody started consuming for today and investing less, real interest rates would go up, right? So if you don’t believe that Market is about as strong as Personal, what invalid reasoning step occurs within the gap between the true premise in Personal to the false conclusion in Market?
Is it invalid that if in 2025 everyone started believing that the world would end in 2030 with probability 1, real interest rates would rise in 2025? Honestly, I’m not even sure of that in real life. People are arguing clever ideas like ‘Shouldn’t everyone take out big loans due later?’ but maybe the lender doesn’t want to lend anymore, if everyone knows that. There’s a supply collapse and a demand collapse, and yes, I see the theoretical argument, but real-world monetary stuff is in fact really strange and complicated; I didn’t see anybody calling the actual interest-rate trajectory surrounding Covid in advance of it actually playing out.
In real life, what zaps you when you think there’s a worldwide pandemic coming and try to trade interest rates, isn’t that you didn’t know about the pandemic ahead of the oblivious market, it’s that you guessed wrong about what the market would really actually do in real life as the pandemic played out and finally ended.
You can sometimes make a profit off an oblivious market, if you guess narrowly enough at reactions that are much more strongly determined. Wei Dai reports making huge profits on the Covid trade against the oblivious market there, after correctly guessing that people soon noticing Covid would at least increase expected volatility and the price of downside put options would go up.
But I don’t think anybody called “there will be a huge market drop followed by an even steeper market ascent, after the Fed’s delayed reaction to a huge drop in TIPS-predicted inflation, and people staying home and saving and trading stocks, followed by skyrocketing inflation later.”
I don’t think the style of reasoning used in the OP is the kind of thing that works reliably. It’s doing the equivalent of trying to predict something harder than ‘once the pandemic really hits, people in the short term will notice and expect more volatility and option prices will go up’; it’s trying to do the equivalent of predicting the market reaction to the Fed reaction a couple of years later.
The entire OP is written as if we lived in an alternate universe where it is way way easier than history has actually shown, to figure out what happens in broad markets after any sort of complicated or nontrivial event occurs in real life that isn’t on the order of “unexpected Fed rate hike” or “company reports much higher-than-expected profits”. And it’s written in such a way as to mislead EAs reading it about the general confidence that the field of economics is able to justly put in predictions about broad market reactions to strange things.
If you haven’t already looked at OP’s recommended investment instrument of (short) LTPZ, which holds inflation-protected Treasuries and is therefore their recommended way of tracking real interest rates, I recommend the following exercise: First, try to figure out what you would have believed a priori, without benefit of hindsight, real interest rates would do over the course of a pandemic. Then, decide what investment strategy you’d have followed with LTPZ if you thought you knew about a pandemic ahead of the market. Then, decide what you think happened to real interest rates with benefit of hindsight. Then, go look at the actual price trajectory of their recommended instrument of LTPZ.
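For anyone attempting the exercise, the first-order (“duration”) approximation gives the mapping from real-yield moves to the price of a long-TIPS fund like LTPZ. The 20-year duration below is an assumed round number for a 15+ year TIPS portfolio, not a figure from the fund’s documentation:

```python
# First-order bond math: percent price change ~= -duration * (yield change).
# For an ETF of long-dated TIPS, the relevant yield is the long real rate.

def approx_price_change(duration_years: float, d_real_yield: float) -> float:
    """Approximate fractional price change for a small move in real yields."""
    return -duration_years * d_real_yield

# If long real yields rose by 100bp, a ~20-year-duration TIPS fund would
# fall roughly 20% (assumed duration; first-order approximation only):
print(f"{approx_price_change(20.0, 0.01):.0%}")  # -20%
```

This is why the direction of the trade matters so much: a long-duration fund is highly levered to real-rate moves, in either direction.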
I am not sure I can properly convey this thought that I am trying to convey; I have had trouble actually conveying this thought to EAs before. The thought is that people often do long careful serious-sounding writeups which EAs then take Very Seriously, because they are so long and so seriously argued, but in fact fail to bind to reality entirely, in a way that doesn’t have to do with the details of the complicated arguments. Very serious arguments about what ought to happen to the price of an ETF that tracks 15-year TIPS, via the intermediate step of arguing about what logically ought to happen to real interest rates, are the sort of thing that, historically, average economists have not really been able to pull off; it’s a kind of thought that you should expect fails to achieve basic binding to reality. What would LTPZ or its post-facto equivalent have been doing around the time of the Cuban Missile Crisis? My model says ‘no prediction’; they’ll have done whatever. Afterwards somebody will make up a story about it in hindsight, but it is not the sort of thing where history says that long complicated analyses are remotely reliably good at doing it in advance.
But there are even weaker links in the argument, so let’s accept the LTPZ step arguendo and pass on.
An even bigger problem is that, since everybody is going to die before anything really pays out, the marginal foresightful trader does not have a strong incentive to early-on move the market toward where the market would end up in equilibrium after everyone agreed on the actual facts of the matter and had time to trade about them.
Prediction markets, I sometimes explain to people, are tools for transmitting future observables, or more generally propositions that people expect to publicly agree on at some future point even if they don’t agree now, lossily backward in time, manifesting as well-calibrated probability distributions.
To run a prediction market, you first and foremost need a future publicly observable measurement, which is a special case of a place where we expect most people to agree on an extreme probability assignment later, even though they don’t agree now or don’t make extreme probability assignments now. You cannot run a prediction market on whether supplementing lots of potassium can produce weight loss; you can only run a prediction market about what an experiment will report in the way of results, or what a particular judge will say the experimental evidence seems to have indicated in five years. You cannot directly observe “whether potassium causes weight loss” as an underlying fact of biology, so you can’t have a prediction market about that; you can only observe what somebody reports as an experimental result, or what a particular person appointed as judge says out loud about the state of evidence later.
The marginal foresightful trader usually has a motive to run ahead of the market and make trades now, based on where the equilibrium ought to settle later; not because they are nobly undertaking a grand altruistic project of transmitting facts backward in time and making the market behave nicely from the standpoint of theoretical economics, but because they expect to get paid after everyone makes the common observation and the market settles into a new equilibrium reflecting that state of knowledge. And then they expect to have that money, or to get a bonus for earning that money for their proprietary trading firm, and for that money to still hold its value, and for them to be able to spend the money on nice things.
In the unusual case of foomdoom, even if doom proceeds slowly enough that a large-enough group of marginal foresightful traders see the foomdoom coming, even if there is somehow a really definitive fire alarm that says “extremely high probability you are dead within two to four years”, it is incredibly unlikely that everyone in the world will agree that yes we’ll all be dead in two to four years, and that the markets will settle into the equilibrium that an economist would say corresponds to the belief that we’ll all be dead in two to four years; which is what’s required for the foresightful proprietary trader to score a huge profit and get a big bonus that year and have time to spend it on some fancy way of passing the remaining time.
People do not usually agree on what will happen in two to four years. This kind of agreement that reliably reflects a fact, and makes a market pay out in a way that you can trust to correspond to that fact, is usually achieved after that fact is publicly definitively observed, not two to four years ahead of the observation.
In case of foomdoom the world never settles into equilibrium later, the bets never pay out, there is never that moment where everybody says “What a foresightful trader that was!” and agree on the fact that yes we sure are all dead now. So even if a proprietary trader sees doom coming, they do not have much of an incentive to dutifully transmit that information backward in time in order to make the market behave now in the way that an economist thinks ought to correspond to the equilibrium it would settle into after everybody agreed that they were dead.
That incentive would only exist if you expected everybody to agree that they were going to die in a few years, far enough ahead of everybody actually being dead, for the markets to settle into equilibrium and foresightful traders to collect bonuses on having made the trade before that. Which is a much stronger and stranger thing to claim, about a planet like Earth, than the usual much weaker claim that a few sharp traders might see a fact coming, and move the markets a few years ahead of time to where they would go after everybody agreed on that fact later.
Though even then, of course, we have cases like the financial crisis of 2006-2008, where some traders did see it coming and turn huge profits, but couldn’t move enough marginal money around to actually shift the entire broader market.
To suppose that the market is broken around foomdoom is not to posit any remotely surprising market behavior! Even in a world where nearly all prices are efficient relative to you, and that’s why you can’t make 10%/day trading Microsoft stock!
What happened in 2006-2008 was much more broken than that! Marginal traders saw it coming, and some of them won huge even though CDSs were not trivial to short; but they didn’t move enough money to shift anything remotely as large as ‘real interest rates’ ahead of the actual materialization of the disaster.
The market’s behavior around Covid also showed much more obliviousness than this; it showed the kind of obliviousness where people I’d previously marked as the strongest EMH challengers reported collecting vast profits over a timespan of a couple of months. (But on chains of logic much less fraught than OP’s, because in real life you can’t call LTPZ movements or real interest rate changes in advance, just things on the order of ‘buy volatility’.)
We should not believe ‘the market’, in the sense of that unusually intelligent entity whose opinions we actually pay attention to, driven by the highly incentivized marginal trader, has any opinion on AGI except that “in the next few years, not everyone will have started believing that they are going to die in a few years after that”. The market is showing no actual opinion on foomdoom, only on what most market participants will believe about foomdoom in a couple of years. The usual incentive mechanism whereby, if a pandemic starts, in a few years most market participants will agree that this pandemic happened and foresightful traders will collect bonuses and spend them—as is responsible for the market sometimes but not always being foresightful, because it is paid to be foresightful—is in this case broken. We are really always seeing, when the market foretells an observable’s value a few years later, that the foresightful marginal trader thinks that a few years later lots of people will hold a certain opinion; it’s just that usually, this common opinion is being mundanely driven by a direct observation.
The market says, “It won’t be the case that in 2030, everyone agrees that they were killed by AGIs.” The market isn’t saying anything about whether that’s because everyone agrees they are alive, or because nobody is left to agree anything.
OP reads like somebody has heard that markets sometimes anticipate things ahead of them happening, that markets sometimes transmit information lossily backward in time, and doesn’t quite seem to have understood the mechanism behind it; that what everyone will agree on later, unusually foresightful marginal traders can sometimes cause the market to reflect now even though not everyone agrees on it yet. Instead they are talking about “What if the market believed...” as if this kind of market belief reflected a numerical majority of the people in the market believing that foomdoom would kill us all before 30-year bonds paid out. But this is not where the market gets its power to say things that we ought to pay special attention to (though even then market isn’t always right about those things, or even righter than us, especially if those things are a little strange, eg Covid etc). The market gets its power from unusually foresightful marginal traders expecting to get a payoff from what everyone believes later after the thing actually happened and therefore most market participants agree about what happened. And this transmission mechanism is broken w/r/t AGI doom in a way that it wouldn’t even be broken for an asteroid strike; with an asteroid strike, you might get weird effects from money losing its value in the future, but at least people could all agree on the asteroid strike coming. With AGI, I think you’d have to be pretty naive to expect everybody to agree that AGI will kill us in two years, two years before AGI kills us. So it is inappropriate to skip over a step we can usually skip over, and compress the true proposition “The market doesn’t believe that everyone in 2030 will believe that in 2030 everyone is dead”, to the false proposition “The market doesn’t believe that in 2030 everyone will be dead.”
Though again—also just to be clear—AGI ruin is a harder call than Covid and I wouldn’t strongly expect the market to get it right, even if transmission weren’t broken; and even if I thought the market would get it right after seeing GPT-4 and that transmission wasn’t broken, I wouldn’t buy “short LTPZ” as a surefire way to profit.
In appendix 3 the authors cite a paper which looks at more-or-less this precise thing:
It seems like government interest rates didn’t change much? But I don’t think I understand this graph.
Those look like nominal rates, not real rates.
Inflation linked bonds are recent, so virtually all historical analyses are going to use nominal rates. I think this is a tradeoff well worth making to say something about asset pricing under existential risk. For the effects to be wrong because of nominality, the Cuban missile crisis would have had to affect market perceptions primarily because of… inflation expectations. I feel comfortable rejecting that as a story.
I’m a bit confused by this comment. Is the claim that you can’t profit from taking out a loan that you have to pay back in 2040? The fact that you get money now and have to pay it back at a time when hypothetically money no longer matters seems like profit to me. If Omega whispers the truth into my ear that in 2040 there will be foom, then that’s guaranteed profit.
Nobody will give you an unsecured loan to fund consumption or donations with most of the money not due for 15+ years; most people in our society who would borrow on such terms would default. (You can get close with some types of student loan, so if there’s education that you’d experience as intrinsically-valued consumption or be able to rapidly apply to philanthropic ends then this post suggests you should perhaps be more willing to borrow to fund it than you would be otherwise, but your personal upside there is pretty limited.)
It might be challenging to borrow (though I’m not sure), but there seem to be plenty of sophisticated entities that should be selling off their bonds and aren’t. The top-level comment does cut into the gains from shorting (as the OP concedes), but I think it’s right that there are borrowing-esque things to do.
The reason sophisticated entities like e.g. hedge funds hold bonds isn’t so they can collect a cash flow 10 years from now. It’s because they think bond prices will go up tomorrow, or next year.
The big entities that hold bonds for the future cash flows are e.g. pension funds. It would be very surprising and (I think) borderline illegal if the pension funds ever started reasoning, “I guess I don’t need to worry about cash flows after 2045, since the world will probably end before then. So I’ll just hold shorter-term assets.”
I think this adds up to, no big investors can directly profit from the final outcome here. Though as everyone seems to agree, anyone could profit by being short bonds (or underweight bonds) while the market started to price in substantial probability of AGI.
If you’re in charge of investing decisions for a pension fund or sovereign wealth fund or similar, you likely can’t personally derive any benefit from having the fund sell off its bonds and other long-term assets now. You might do this in your personal account but the impact will be small.
For government bonds in particular it also seems relevant that I think most are held by entities that are effectively required to hold them for some reason (e.g. bank capital requirements, pension fund regulations) or otherwise oddly insensitive to their low ROI compared to alternatives. See also the “equity premium puzzle”.
I don’t know whether I defend Yudkowsky’s view (I only skimmed), but:
The “profit” in the scenario you describe doesn’t seem sufficient to move the market (as small investors), because it isn’t “profit” that you can keep growing and reinvesting (snowballing) until your Special Insights (TM) fix the market;
If you end up with a scenario where 2040 arrives and it’s Utopia, then there may not be any serious incentive to care, whereas there would be downside risks if it doesn’t occur: if you’re right, cool, you lived slightly better for 20 years out of a 1,000,000,000-year life of happiness; if you’re wrong, you might be in so much debt you can’t pay medical bills/whatever, and die before the actual AGI date.
I’m not confident you could get massive loans with no collateral and no business model or whatnot to repay the loans come 2040.
Beyond just taking vacation days, if you’re a bond trader who believes in a very high chance of x-risk in the next five years, it probably makes sense to quit your job and fund your consumption out of your retirement savings. At which point you aren’t a bond trader anymore and your beliefs no longer have much impact on bond prices.
Or leave your job to go try and reduce x-risk?
Insofar as humans care about x-risk and act on this care, you’d expect bond traders to be people who either think x-risk is very low, think it’s high and think they have no nontrivial way to affect it, or think it’s high and think trading is their best way to affect it (e.g., through earning to give).
People disagreeing, would you say why?
My guess for the pushback: 1 week before the end of the world, you think a sizable part of the population will notice and change their economic behavior drastically. I imagine this scenario contains a slow “attack” by AI that everyone sees coming?
(agree vote = yeah that is the pushback)
If the only scenario for AI were “more or less normal economic growth, then foom,” I think Eliezer would be correct. However, I think a likely scenario is that there is accelerated growth, coincident with accelerated risk (via e.g. AI terrorism or wars), before foom. This cluster of outcomes may even be modal. In that case, interest rates would very likely rise and traders would be able to profit, before foom.
Put another way, we often place substantial weight on non-foom good and bad AI scenarios, even if foom is a risk. The market seems to be ruling out non-foom AI risk as well as non-foom AI growth before foom. Eliezer is correct that the market may not be ruling out the (IMO unlikely) scenario of “no-AI induced growth or risk prior to foom, then foom.”
I’m trying to make sure I understand: Is this (a more colorful version) of the same point as the OP makes at the end of “Bet on real rates rising”?
I wouldn’t say that I have “a lot of” skepticism about the applicability of the EMH in this case; you only need realism to believe that the bar is above USDT and Covid, for a case where nobody ever says ‘oops’ and the market never pays out.
What do you make of the ‘impatient philanthropy’ argument? Do you think EAs should be borrowing to spend on AI safety?
Not until timelines are even blatantly shorter than present and long-term loans are on offer, and not unless there’s something useful to actually do with the money.
Am I being dumb or do you mean short TIPS? If real interest rates rise, TIPS go down.
Yes—you are correct.
I trade global rates for a large hedge fund, so I think I can give the inside view on how financial market participants think about this.
First, the essential claim is true—no one in rates markets talks about the theme of AI driving a massive increase in potential growth.
However, even if this did become accepted as a potential scenario it would be very unlikely to show up in government bond yields so using yields as evidence of the likelihood of the scenario is, imho, a mistake. I’ll give a number of reasons.
Rates markets don’t price in events (even ones that are fully known) more than one or two years ahead of time (Y2K, contentious elections in Italy or France, etc.). This is generally outside participants’ time horizons, but also...
A lot can happen in two years (much less ten years). Major terrorist attack, pandemic, nuclear war to name three possibilities all of which would fully torpedo any bet you would make on AI, no matter how certain you are of the outcome.
The premise is not obviously true that higher growth leads to higher real yields. That is one heuristic among many when thinking about what real yields should do. It’s important to think about the mechanism here—if real yields are well below the economic growth rate then businesses are incentivized to borrow money to invest in projects which earn that growth rate. But this is really a return on capital phenomenon, not a growth rate phenomenon. It’s much more applicable to a scenario of humanoid robots where you have to invest large amounts of money up front to yield a return in wage savings over time. Zero marginal cost software doesn’t require the same kind of capital investment so there is no surge in borrowing necessary to attain the economic growth.
The heuristic of rates=growth is really better thought of as what rate is necessary to equilibrate savings and investment. So even a massive demand for investment funds can be offset by willing savers. In the case where gains from AI were evenly distributed it’s plausible to say the income effect is such that people generally will feel richer and thus want to consume more today (and will require a high interest rate to push consumption from today to tomorrow). But in reality the wealth gains from AI will probably accrue to a small slice of the population. Rich people have very low marginal propensities to consume (and high marginal propensities to save). So a surge in wealth to rich people is generally thought to push interest rates down.
Saving the most practical point for last: Interest rates in developed economies are heavily influenced by where the Fed wants them to be in order to fulfill their mandate of full employment and 2% inflation. If there were a huge speculative increase in interest rates in anticipation of a growth surge ten years from now that would torpedo current borrowing and thus the economy. So the Fed would fight back by cutting the overnight interest rate and, if necessary, purchasing bonds. The Fed’s ammo is essentially unlimited and they would get their way and you would lose money.
A couple of questions related to this, not directly relevant but I’ve been wondering about them and you might know something:
How to square interest rate = return on capital with the fact that, for most of human history, the growth rate was close to zero and the interest rate was significantly higher?
How does this account for risk? The interest rate in question is risk-free, while the return on capital is risky—it fluctuates over time, and is sometimes negative. So shouldn’t the growth rate be higher than the interest rate? (I think the long-run real growth rate is usually higher than the real interest rate—about 1–2% and 0–1% respectively, IIRC—which might be the answer.)
Cheers, I thought this comment was very informative
Nice post, but my rough take is:
it’s relatively common for markets to be inefficient but unexploitable; trading on “everyone dies” seems a clear case of hard-to-exploit inefficiency
markets are not magic; the impacts of one-off events with complex consequences are difficult to price in, and all the magical market aggregation boils down to is a bunch of human brains doing the trades. E.g., I was able to beat the market and get an n-times return at a point when markets were insane about covid; later, I talked about it with someone at one of the giant hedge funds, and the simple explanation is that while they were looking into it, at some point I knew more about covid than they were able to assemble
an example of such a hard-to-predict event is, e.g., the capabilities and impacts of a specific model
the dichotomy 30% growth / everyone dies is unrealistic for trading purposes
near term, there are various outcomes like “industry X gets disrupted” or “someone loses job due to automation” or “war”
if you anticipate fears of this type to dominate in the next 10 years, you should price in many people increasing their savings and borrowing less
I think there are a ton of Transformative AI scenarios where not “everyone dies”. I think many AI Safety researchers are currently expecting less than a 40% chance of everyone dying.
I also really have a hard time imagining many financial traders actually seriously believing:
1. Transformative AI is likely to happen
2. It’s very likely to kill everyone, conditional on happening. (95%++)
Both of those are radical right now. You need to believe (1) to believe we’re likely doomed soon.
I haven’t seen any evidence of people with money seriously discussing (2).
Sounds like I’d like a hedge fund to write the news for me (after they trade on it, no problem; but they must have great teams doing the analysis)
I’m curating this post because I think it raises important points that I’d like more people to engage with, and because the discussion on it has been really interesting (124 comments right now). I also think it’d be very good to have more genuine critical engagement with the case for prioritizing work on AI existential risk.
However, the post seems wrong or at least heavily disputable in important ways, so I will also point out relevant considerations and comments. Please note that I’m not an expert; in some cases, I’m merely summarizing my understanding of what others have said.
This comment ended up being very long. Before it goes on, I want to direct attention to some other content arguing against the case that risk from AI is high or should be prioritized:
But have they engaged with the arguments? (Phil Trammell)
Counterarguments to the basic AI risk case (Katja Grace)
How sure are we about this AI stuff? (Ben Garfinkel)
This thread by Matthew Barnett (which is responding to arguments against AGI being near)
Wikipedia’s section on skepticism about risk from AGI
The overall claim that the post is making is the following (I think):
Markets are a good way of finding an outside view on a topic — in this case, on transformative AI. Long-term real rates would be higher than they currently are if markets believed that transformative AI was coming in the next ~30 years. So you should expect longer timelines for transformative AI. Also, if you really believe that transformative AI is coming soon, you can make money by disagreeing with the market’s current position (by shorting bonds).
One thing that seems worth pointing out before we get into the disagreements: the arguments in the post rely on the idea that, if transformative AI is coming at a certain time, we should expect that more and more people will believe this as that time draws near. This doesn’t seem unreasonable to me, but I didn’t see it outlined as an assumption in the post.
Moving on: some disagreements (a non-exhaustive list):
Interpreting or correcting markets like this is a messy business, especially when trying to predict events that are years out, especially when there’s not a lot to profit from in the near-term or continuously. And the crazier things get, the more unusual things markets might do. As Jan points out,
“the dichotomy 30% growth / everyone dies is unrealistic for trading purposes
near term, there are various outcomes like “industry X gets disrupted” or “someone loses job due to automation” or “war”
if you anticipate fears of this type to dominate in next 10 years, you should price many people increasing their savings and borrowing less”
And Eliezer writes:
You can sometimes make a profit off an oblivious market, if you guess narrowly enough at reactions that are much more strongly determined. Wei Dai reports making huge profits on the Covid trade against the oblivious market there, after correctly guessing that people soon noticing Covid would at least increase expected volatility and the price of downside put options would go up.
But I don’t think anybody called “there will be a huge market drop followed by an even steeper market ascent, after the Fed’s delayed reaction to a huge drop in TIPS-predicted inflation, and people staying home and saving and trading stocks, followed by skyrocketing inflation later.”
I don’t think the style of reasoning used in the OP is the kind of thing that works reliably. It’s doing the equivalent of trying to predict something harder than ‘once the pandemic really hits, people in the short term will notice and expect more volatility and option prices will go up’; it’s trying to do the equivalent of predicting the market reaction to the Fed reaction a couple of years later.
And from this comment: Expecting the market to be (decently) efficient here is less reasonable, because
“Suppose you have someone who has better insights than everyone else about some asset. They may not be rich and for various related reasons they are unable to immediately correct the market (i.e., the market is actually temporarily inefficient). However, if they are [right], they either a) can keep profiting over and over again until they become liquid/rich enough to individually correct the market, and/or b) other people see that this person is profiting over and over again so they jump in and contribute to market correction.”
The problem is that it might be the case that there is only one or two cycles for profit with AGI until the world goes crazy, but it could take many years for this strategy to actually profit, during which time the market will be “temporarily” inefficient. If real interest rates don’t rise for 15 years, and only start to rise ~5 years before AGI, the market is inefficient for 15 years because small players can’t profit to fix the situation.
Markets are really bad at dealing with extreme events
The classic argument I’ve seen for prediction markets goes like this: prediction markets work because people can expect to be paid for being right. Say someone sets up a market on whether humanity is extinct by 2030. If we’re not extinct, traders who bought “not extinct” shares profit. If we are… no one profits. So you don’t really have an incentive to buy “extinct” shares, even if you know for a fact that humanity will go extinct before 2030. (There are cleverer ways to set up this kind of market, but we’ll skip them for now.) (Related post/section.) (Also explained in this comment, among others.)
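The payoff asymmetry can be made explicit with a toy expected-value calculation (illustrative numbers only, not from the post):

```python
# Toy model of an extinction prediction market (illustrative numbers).
# A share pays $1 if humanity is extinct by 2030, $0 otherwise.
# Key twist: payouts in the "extinct" world are worthless to the buyer,
# since no one is around to spend them.
def usable_expected_value(p_extinct, share_price):
    payoff_if_extinct = 0.0          # $1 on paper, but unusable
    payoff_if_not = -share_price     # share expires worthless
    return p_extinct * payoff_if_extinct + (1 - p_extinct) * payoff_if_not

# Even a trader 99% sure of extinction loses in expectation at any positive price:
print(usable_expected_value(0.99, 0.10) < 0)  # True
```

Since the usable expected value is negative for any positive share price, no one has an incentive to buy “extinct” shares, regardless of their credence.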
This applies here, to a certain extent, and several commenters point this out. The post tries to get around this by focusing on real rates, but there’s only a limited amount of profit possible if the markets are incorrect in this direction; as suggested, you can stop saving past 2030 (or discount according to your timelines) and try to short government debt, but this is not enough to profit massively and not enough to fully correct the markets if they are off in this way.
Jakob points this out in a comment which also notes, “You can argue that one could take a short position on interest rates (e.g., in the form of a loan) if you believe that they will rise at some point, but that is a different bet from short timelines—what you’re betting on then, is when the world will realize that timelines are short, since that’s what it will take before many people choose to pull out of the market, and thus drive interest rates up. It is entirely possible to believe both that timelines are short, and that the world won’t realize AI is near for a while yet, in which case you wouldn’t do this.” Or, as explained in other comments, “If [you find out] that everyone will be dead in 2030 with probability 1, there’s no direct way to make a market profit on that private information over the next 2 years, except insofar as foresightful traders today expect less foresightful future traders to hear about AI and read an economics textbook and decide that interest rates theoretically ought to rise and go long TIPS index funds. Foresightful traders today don’t expect this.” (More on this.)
One proposed mechanism for profiting off something like knowing that humanity will go extinct by 2030 is taking a big loan that you don’t expect to pay out. However, lenders would generally ask for a big collateral/margin on a loan with such a long timeline (otherwise lots of people would take loans like this and default). Additionally, if the treasury rate went down in the short-term, the lender might close out the trade at a loss for the borrower, even if the borrower is right on a longer time span. (A commenter points out that there might be student loans that work for this.)
I think the examples listed in the “Empirical evidence on real rates and mortality risk” section are importantly different from transformative AI because (1) the catastrophes aren’t existential (see above, on extreme events) and (2) in those cases there’s info that some people have — insider trading, which we don’t necessarily have here but which is good for correcting markets. So I think you should expect this approach to work better for the examples than for the AI case.
I don’t know what happened in situations that seem slightly more relevant, like the Cold War, and I would be interested in hearing more if someone knows or can look into it. But even those situations seem very different.
A number of people in the comments are discussing whether enough people trading/investing/borrowing etc. are aware enough of arguments about transformative AI to be able to form an opinion on this and correct the market if the market is in fact off in this direction. Rohin writes: “If you already knew that belief in AGI soon was a very contrarian position (including amongst the most wealthy, smart, and influential people), I don’t think you should update at all on the fact that the market doesn’t expect AGI.”
″… What’s going wrong, I think, is something like this. People encounter uncommonly-believed propositions now and then, like “AI safety research is the most valuable use of philanthropic money and talent in the world” or “Sikhism is true”, and decide whether or not to investigate them further. If they decide to hear out a first round of arguments but don’t find them compelling enough, they drop out of the process. (Let’s say that how compelling an argument seems is its “true strength” plus some random, mean-zero error.) If they do find the arguments compelling enough, they consider further investigation worth their time. They then tell the evangelist (or search engine or whatever) why they still object to the claim, and the evangelist (or whatever) brings a second round of arguments in reply. The process repeats.
As should be clear, this process can, after a few iterations, produce a situation in which most of those who have engaged with the arguments for a claim beyond some depth believe in it. But this is just because of the filtering mechanism: the deeper arguments were only ever exposed to people who were already, coincidentally, persuaded by the initial arguments. If people were chosen at random and forced to hear out all the arguments, most would not be persuaded. …”
Related, I think: The motivated reasoning critique of effective altruism, Epistemic learned helplessness
I can’t find a quick link about it that concisely explains just the argument — would appreciate one!
Thanks for curating the post!
Just a quick comment to highlight the responses which we have given to the list of disagreements, and to tweak your summary a bit to better reflect what I (not to speak for my other two co-authors) see our post as saying:
Edit: As it turns out, there’s a nice third party summary which even more concisely captures the essence of what we are trying to get across!
I appreciate the summary, and I’m especially glad to see it done with an emphasis on relatively hierarchical bullet points, rather than mostly paragraph prose. (And thanks for the reference to my comment ;)
Nonetheless, I am tempted to examine this question/debate as a case study for my strong belief that, relative to alternative methods for keeping track of arguments or mapping debates, prose/bullets + comment threads are an inefficient/ineffective method of
Soliciting counterarguments or other forms of relevant information (e.g., case studies) from a crowd of people who may just want to focus on or make very specific/modular contributions, and
Showing how relevant counterarguments and information relate to each other—including where certain arguments have not been meaningfully addressed within a branch of arguments (e.g., 3 responses down), especially to help audiences who are trying to figure out questions like “has anyone responded to X.”
I’m not even confident that this debate necessarily has that many divisive branches—it seems quite plausible that there are relatively few cruxes/key insights that drive the disagreement—but this question does seem fairly important and has generated a non-trivial amount of attention and disagreement.
Does anyone else share this impression with regards to this post (e.g., “I think that it is worth exploring alternatives to the way we handle disagreements via prose and comment threads”), or do people think that summaries like this comment are in fact sufficient (or that alternatives can’t do better, etc.)?
I spent about an hour today trying to convince a friend that works in private equity that OpenAI is undervalued at $30B. I pitched him on short AI timelines and transformative growth, and he didn’t disagree with those arguments directly. He mostly questioned whether OpenAI would reap the benefits of short timelines. A few of the points:
It’s a competitive industry with other players on par or not far behind. Google, Meta, and Anthropic are there already, and startups like Stability and Cohere could quickly close the gap. This is especially true if “scale is all you need”, rather than human capital or privately generated data.
The main opportunity is B2B, not B2C. Businesses are more cost sensitive and interested in cheaper alternatives than consumers, who gladly accept name brands.
Profits often lag behind research breakthroughs for years and even decades. There’s no billion dollar app for GPT yet. Investors “don’t care about anything that’s more than 15 years away.”
IMO these are boring economic arguments that don’t refute the core thesis of short timelines or AI risk. OpenAI is getting a similar valuation to Grammarly, which also sells an LLM product, but with worse tech and better marketing. It’s being valued on short-term revenue prospects more than on considerations about TAI timelines.
If OpenAI is working closely with Microsoft, then MSFT becomes an extremely attractive investment. Microsoft is in talks to invest $10B. If this goes through, I would strongly advise investing in MSFT. Seems like one of the best ways to get exposure.
MSFT is already valued at $1.7T (it’s the 3rd largest company in the world). What multiple do you think is realistic over the next 5-10 years? Or would you suggest call options? (If Microsoft owns 1⁄3 of OpenAI with this latest deal, that still only represents 0.6% of MSFT marketcap. Also OpenAI profits are meant to be capped at 100x; with that you’d only get a 1.6x on your MSFT.)
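The parenthetical arithmetic works out as follows (using the figures stated in the comment; the $10B stake and 100x profit cap are reported, not confirmed):

```python
# Back-of-the-envelope on how much OpenAI upside is available to MSFT
# shareholders, using the comment's stated (reported, unconfirmed) figures.
msft_market_cap = 1.7e12     # ~$1.7T
openai_stake = 10e9          # reported ~$10B investment
profit_cap_multiple = 100    # OpenAI's capped-profit structure

stake_share_of_msft = openai_stake / msft_market_cap
print(round(stake_share_of_msft * 100, 2))  # 0.59 (% of market cap)

# Even at the 100x cap, the stake adds at most ~$1T of value:
max_upside = openai_stake * profit_cap_multiple
print(round(1 + max_upside / msft_market_cap, 2))  # 1.59 (x on MSFT overall)
```

So even in the maximal capped-profit scenario, the OpenAI stake alone roughly caps the AI-driven upside on MSFT at ~1.6x, matching the figure in the comment.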
MSFT is valued at 1.7T using scenarios that don’t assume transformative AI.
If you really think TAI will happen soon, and think they will dominate, I think the EV of MSFT would go up much further.
How much further? And how much relative to the stock market as a whole?
The stock market as a whole could become significantly larger.
I just did a quick search:
- The total S&P 500 Market Cap is ~$33T
- Total Global Wealth is ~$460T
In comparison, FAANG tech stocks might be valued at like ~$6T?
[Edit: Changed “tech stocks” to “FAANG tech stocks”]
Tech as a whole went up a lot after COVID, then down in the last year or so. This helps demonstrate how much the entire sector can change in value.
(In addition, if a ton of wealth started valuing tech stocks higher, it would take less than $1 of inflows to raise their value by $1. Maybe much less than $1. A great deal of global “wealth” is really just “expectations of value or future earnings”. So as expectations rise, the S&P 500 and, I believe, “Global Wealth” would rise as well.)
According to companiesmarketcap.com, Apple, Microsoft, Google and Amazon alone are worth $6 trillion. I am not familiar with an estimate of the total market size, but I suspect it is substantially higher than $6 trillion?
Sorry—I was thinking, and should have specified, about the FAANG stocks, plus maybe NVIDIA and one or two AI-specific companies. All of tech globally is significantly larger, though I think most of these companies likely won’t substantially compete in the AI race.
If Microsoft takes the lead among big tech companies, then the market cap doubling in five years would be reasonable. Though it’s unclear they will pull that off. If timelines are fast and Microsoft stays in the lead, $10T isn’t crazy. It’s also worth noting that even if the AI thesis doesn’t play out, Microsoft is a 25 P/E blue chip with very capable management. So the downside here is pretty low, as far as buying tech stocks goes. Buying call options requires getting the timing right.
Thanks for writing this! I think market data can be a valuable source of information about the probability of various AI scenarios—along with other approaches, like forecasting tournaments, since each has its own strengths and weaknesses. I think it’s a pity that relatively little has yet been written on extracting information about AI timelines from market data, and I’m glad that this post has brought the idea to people’s attention and demonstrated that it’s possible to make at least some progress.
That said, there is one broad limitation to this analysis that hasn’t gotten quite as much attention so far as I think it deserves. (Basil: yes, this is the thing we discussed last summer….) This is that low real, risk-free interest rates are compatible with the belief
1) that there will be no AI-driven growth explosion,
as you discuss—but also with some AI-growth-explosion-compatible beliefs investors might have, including
2) that future growth could well be very fast or very slow, and
3) that growth will be fast but marginal utility in consumption will nevertheless stay high, because AI will give us such mindblowing new things to spend on (my “new products” hobby-horse).
So it seems impossible to put any upper bound (below 100%) on the probability people are assigning to near-term explosive growth purely by looking at real, risk-free interest rates.
To infer that investors believe (1), one of course has to think hard about all the alternatives (including but not limited to (2) and (3)) and rule them out. But (if I’m not mistaken) all you do along these lines is to partly rule out (2), by exploring the implications of putting a yearly probability on the economy permanently stagnating. I found that helpful. As you observe, merely (though I understand that you don’t see it as “merely”!) introducing a 20% chance of stagnation by 2053 is enough to mostly offset the interest rate increases produced by an 80% chance of Cotra AI timelines. You don’t currently incorporate any negative-growth scenarios, but even a small chance of negative growth seems like it should be enough to fully offset said interest rate increase. This is because of the asymmetry produced by diminishing marginal utility: the marginal utility of an extra dollar saved can only fall to zero, if you turn out to be very rich in the future, whereas it can rise arbitrarily high if you turn out to be very poor. (You note this when you say “the real interest rate reflects the expected future economic growth rate, where importantly the expectation is taken over the risk-neutral measure”, but I think the departure from caring about what we would normally call the expected growth rate is important and kind of obscured here.)
This seems especially relevant given that what investors should be expected to care about is the expected growth rate of their own future consumption, rather than of GDP. Even if they’re certain that AI is coming and bound to accelerate GDP growth, they could worry that it stands some chance of making a small handful of people rich and themselves poor. You write that “truly transformative AI leading to 30%+ economy-wide growth… would not be possible without having economy-wide benefits”, but this is not so clear to me. You might think that’s crazy, but given that I don’t, presumably some other investors don’t.
Anyway: this is all to say that I’m skeptical of inferring much from risk-free interest rates alone. This doesn’t mean we can’t draw inferences from market data, though! For one thing, on the hypothesis that investors believe “(2)”, we would probably expect to see the “insurance value” of bonds, and thus the equity premium, rising over time (as we do, albeit weakly). For another thing, one can presumably test how the market reacts to AI news. I’m certainly interested to see any further work people do in this direction.
Thanks for these comments!
The short answer here is: yes agreed, the level of real interest rates certainly seems consistent with “market has some probability on TAI and some [possibly smaller] probability on a second dark age”.
Whether that’s a possibility worth putting weight on—speaking for myself, I’m happy to leave that up to readers.
(ie: seems unlikely to me! What would the story there be? Extremely rapid diminishing returns to innovation from the current margin, or faster-than-expected fertility declines?)
As you say, perhaps the possibility of the stagnation/degrowth scenario would have other implications for other asset prices, which could be informative for assessing likelihood.
For what it’s worth, I suspect many readers do think there’s some chance of stagnation (i.e. put 5% credence or more). Will MacAskill devotes an entire chapter to growth stagnation in What We Owe the Future. In fact he thinks it’s the most likely of the four future trajectories discussed in the book, giving it 35% credence (see note 22 to chapter 2, p. 273-4).
The Samotsvety forecasters think this is too high, but each still puts at least 1% credence on the scenario and their aggregated forecast is 5%. Low, but suggesting it’s worth considering.
I am confused. This is not a lot of return at 10-year timelines, and calling it “mouth-watering” seems a bit excessive. A cumulative return of 162% over 10 years is equivalent to around 10% annually, which is around as good as putting money into a normal ETF (which averaged around 8–10% annually over the last 20 years), and this is not considering that since I am short the market, in any world where I am wrong, I will likely lose a lot of money. If I take the downside into account, I end up with an annualized return of around 5–6% on this portfolio, which really doesn’t seem great to me.
Taking the math here at face value, you are suggesting a trading strategy with a ~5–6% annualized return, which is maybe barely competitive with other investing strategies, and a lot less than the average return of most EA-adjacent investors who have been thinking about this stuff for the last few years. Like, yeah, maybe you can very slightly outperform the market, but this seems hardly like a strong correcting force, and it definitely doesn’t seem like a good pitch that I should move my investment away from just holding tech-concentrated ETFs with a bunch of Nvidia exposure.
I might just be misunderstanding the returns you are claiming here, and the exact financial trade. Am I missing anything? What trading strategy do you concretely recommend that will make me money if timelines are shorter than this, and that outperforms something simple like buying Nvidia stock (and ideally makes me money at least a year or two before I expect the world to end, since I don’t think I really gain much by having more money just before AI kills everyone)?
(Edit: Adjusted calculations for 10 years instead of 18 years)
where did the 16 years you mention come from? don’t a lot of people have AI timelines shorter than that?
Sorry, I meant to say 18 years (now edited), which is the number of years from the quote (“with a duration of 18 years”), which is presumably the time for the treasuries to mature, though I guess if you expect the market to move earlier than that, you should also expect returns earlier than that.
Let’s plug in the 10 year number instead, which the OP argues is “decisively rejected” and then calculate the expected return of this instrument given expected growth.
In the last 10 years the TTT ETF had a return of approximately −85%. I sure feel really confused about whether it makes sense to forecast a negative return for an asset like this, and to expect this trend to continue, especially in a conditional case like this, but I sure feel quite confident taking a bet that in the business-as-usual case your triple-leveraged bet against treasuries will have a negative expected return somehow (since its primary purpose is usually hedging).
So, I don’t know, my guess is going short the market in the business-as-usual world is a pretty bad idea, and you will probably indeed lose 80% of your investment over the next 10 years, if AI doesn’t happen.
So, assuming that I am betting $10,000 on this, and I am 50% confident in 10-year timelines: in expectation I end up with 0.5∗($10k∗2.62)+0.5∗($10k∗0.2)=$14.1k, i.e. a ~41% return over 10 years, which is around a 3.5% annual return, and so lower than the historical performance of ETFs.
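For reference, a quick sketch of this expected-value arithmetic (using the comment’s assumptions: a $10k stake, 50/50 odds, +162% over 10 years if rates move as described, −80% otherwise):

```python
# Expected value of a $10k triple-short-treasuries bet, under the
# commenter's stated assumptions:
#   50% chance of short timelines -> position returns +162% over 10 years
#   50% chance of business as usual -> position loses 80% (per TTT history)
stake = 10_000
p_short_timelines = 0.5

value_if_right = stake * (1 + 1.62)   # $26,200
value_if_wrong = stake * (1 - 0.80)   # $2,000

expected_value = (p_short_timelines * value_if_right
                  + (1 - p_short_timelines) * value_if_wrong)
total_return = expected_value / stake - 1
annualized = (1 + total_return) ** (1 / 10) - 1  # over a 10-year horizon

print(round(expected_value))       # 14100
print(round(annualized * 100, 1))  # 3.5 (percent per year)
```

Note the 50/50 probability weights: the expected payoff is the average of the two scenario outcomes, not their spread.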
Duration doesn’t mean time to maturity. It’s a measure of a bond’s sensitivity to interest rates: higher duration = more sensitivity. It’s measured in years, though, which is confusing. You can make your 162% in one year if interest rates move as the authors say, which is pretty mouth-watering!
(edit: just to showcase the degree of difference, a bond with 40 years to maturity can have a duration of just ~10 years if the bond’s coupon is 10% and market rates were also 10% at the time of issuance. This means the value of the bond will change less with rising rates. A 40-year bond with a 1% coupon, issued when market rates were also 1%, will have a duration of around 33 years, which in plain English just means that if interest rates go up a lot you get “FTX-linked tokens in November” returns. Bonds are tricky things and their pricing is weird, especially in ZIRP environments)
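The two duration figures in the parenthetical can be reproduced with the standard Macaulay formula (a minimal annual-pay sketch with the comment’s round numbers; real treasuries pay semiannually, so the figures differ slightly):

```python
# Macaulay duration of an annual-pay bond with face value 1:
# the present-value-weighted average time until cash flows arrive.
def macaulay_duration(coupon, ytm, years):
    flows = {t: coupon for t in range(1, years + 1)}
    flows[years] += 1.0  # principal returned at maturity
    price = sum(cf / (1 + ytm) ** t for t, cf in flows.items())
    weighted = sum(t * cf / (1 + ytm) ** t for t, cf in flows.items())
    return weighted / price  # in years

# 40-year bond, 10% coupon, issued at par when rates were 10%:
print(round(macaulay_duration(0.10, 0.10, 40), 1))  # 10.8
# 40-year bond, 1% coupon, issued at par when rates were 1%:
print(round(macaulay_duration(0.01, 0.01, 40), 1))  # 33.2
```

The high-coupon bond returns most of its value early via coupons, so its duration (and thus its rate sensitivity) is far below its 40-year maturity; the low-coupon bond’s value is concentrated in the distant principal repayment.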
Yep, makes sense. I adjusted my calculations to be about 10 years, to more directly reflect the post, which doesn’t seem to change much.
Agree that if all the other details checked out, and you had 1-2 year timelines, this might imply a higher expected return, but I don’t currently see why it would imply that the markets decisively reject 10 year timelines, even if you buy the rest of the model (which I also have a bunch of other critiques of).
This post’s thesis is that markets don’t expect AGI in the next 30 years. I’ll make a stronger claim: most people don’t expect AGI in the next 30 years; it’s a contrarian position. Anyone expecting AGI in that time is disagreeing with a very large swath of humanity.
(It’s a stronger claim because “most people don’t expect AGI” implies “markets don’t expect AGI”, but the reverse is not true. (Not literally so—you can construct scenarios like “only investors expect AGI while others don’t” where most people don’t expect AGI but the market does expect AGI—but these seem like edge cases that clearly don’t apply to reality.))
Personally I feel okay disagreeing with the rest of humanity on this, because (a) the arguments seem solid to me, while the counterarguments don’t, and (b) the AGI community has put in much more serial thought into the question than the rest of humanity.
If you already knew that belief in AGI soon was a very contrarian position (including amongst the most wealthy, smart, and influential people), I don’t think you should update at all on the fact that the market doesn’t expect AGI.
If you didn’t know that, consider this your wake-up call to reflect on the fact that most people disagree with you. You don’t need to think about financial markets or real interest rates or what trades would make you rich; all of those effects are downstream of the fact that most people disagree with you.
(Separately, I do expect you can bet on belief in AGI through financial markets, since I do expect that if AGI is coming soon the rest of the world will eventually figure that out. But it’s not clear when you can expect to make money; that depends on when exactly the rest of humanity figures it out.)
I’m not sure that’s true. Markets often price in things that only a minority of people know or care about. See the lithium example in the original post: that was a case where “most people didn’t know lithium was used in the H-bomb” didn’t imply that “markets didn’t know lithium was used in the H-bomb”.
^This is an extremely, extremely important point!
Market prices are not a democracy. The logic for the efficiency of markets is emphatically NOT ‘wisdom of the crowds’. It’s that the most knowledgeable traders have the most to gain from trading, and so do so, and determine the price. (I have a riff on this here)
Hm, Rohin has some caveats elaborating on his claim.
Unless they were edited in after these comments were written (which doesn’t seem to be the case afaict) it seems you should have taken those caveats into account instead of just critiquing the uncaveated claim.
Sorry, I stand by my comment 110%.
I want to maximally push back on views like this. The economic logic for the informational efficiency of markets has nothing to do with consensus or ‘non-contrarianness’. Markets are informationally efficient because of the incentive for those who are most informed to trade.
The argument here emphatically cannot be merely summarized as “AGI soon [is] a very contrarian position [and market prices are another indication of this]”.
If investors with $1T thought AGI was coming soon, and therefore tried to buy up a portfolio of semiconductor, cloud, and AI companies (a much more profitable and capital-efficient strategy than betting on real interest rates), they could only buy a small fraction of those industries at current prices. There is a larger pool of investors who would sell at much higher than current prices, balancing that minority.
Yes, it’s weighted by capital and views on asset prices, but a small portion of the relevant capital trying to trade (with risk, years in advance) on a thesis impacting many trillions of dollars of market cap still isn’t enough to drastically change asset prices against the counter-trades of other investors.
There is almost no discussion of AGI prospects by financial analysts, consultants, etc (generally if they mention it they just say they’re not going to consider it). E.g. they don’t report probabilities it would happen or make any estimates of the profits it would produce.
Rohin is right that AGI by the 2030s is a contrarian view, and that there’s likely less than $1T of investor capital that buys that view and selects investments based on it.
I, like many EAs, made a lot of money betting in prediction markets that Trump wouldn’t overturn the 2020 election. The most informed investors had plenty of incentive to bet, and many did, but in the short term they were swamped by partisan ‘dumb money.’ The sane speculators have proportionally a bit more money to correct future mispricings after that event, but not much more. AI bets have done very well over the last decade but they’re still not enough for the most informed people to become a large share of the relevant pricing views on these assets.
1. We would welcome engagement from you regarding our argument that stock prices are not useful for forecasting timelines (the sign is ambiguous and effect noisy).
2. You offer what is effectively a fully general argument against market prices ever being swayed by anything—a bit more on this point here. Price changes do not need to be driven by volume! (cf. the no-trade theorem, for the conceptual idea)
3. I’m not sure if this is exactly your point about prediction markets (or if you really want to talk about total capital, on which see again #2), but:
Sovereign debt markets are orders of magnitude larger than PredictIt or other political prediction markets. These are not markets where individual traders are capped to $600 max positions and shorting is limited (or whatever the precise regulations are)! Finding easy trades in these markets is …not easy.
But the stocks are the more profitable and capital-efficient investment, so that’s where you see effects first on market prices (if much at all) for a given number of traders buying the investment thesis. That’s the main investment on this basis I see short timelines believers making (including me), and has in fact yielded a lot of excess returns since EAs started to identify it in the 2010s.
I don’t think anyone here is arguing against the no-trade theorem, and that’s not an argument that prices will never be swayed by anything—only that you can have a sizable amount of money invested on the AGI thesis before it sways prices. Yes, price changes don’t need to be driven by volume if no one wants to trade against them. But plenty of traders who don’t buy AGI would trade against AGI-driven valuations, e.g. against the high P/E ratios that would ensue. Rohin is saying not that the majority of investment capital that doesn’t buy AGI will sit on the sidelines, but that it will trade against the AGI-driven bet, e.g. by selling assets at elevated P/E ratios. At the moment there is enough money trading against AGI bets that market prices are not in line with AGI-bet valuations. I recognize that means the outside-view EMH heuristic of going with the side trading more money favors no AGI, but on the object level I think the contrarian view here is right.
It’s just a simple illustration that you can have correct minorities that have not yet been able to grow by profit or imitation to correct prices. And the election mispricings also occurred in uncapped crypto prediction markets (although the hassle of executing very quickly there surely deterred or delayed institutional investors), which is how some made hundreds of thousands or millions of dollars there.
Can you describe in concrete detail a possible world in which:
“AGI in 30 years” is a very contrarian position, including amongst hedge fund managers, bankers, billionaires, etc
Market prices indicate that we’ll get AGI in 30 years
It seems to me that if you were in such a situation, all of the non-contrarian hedge fund managers, bankers, billionaires would do the opposite of all of the trades that you’ve listed in this post, which would then push market prices back to rejecting “AGI in 30 years”; they have more money so their views dominate. What, concretely, prevents that from happening?
Minor (yet longwinded!) comment: FWIW, I think that:
Rohin’s comment seems useful
Stephen’s and your rebuttal also seem useful
Stephen’s and your rebuttal does seem relevant to what Rohin said even with his caveat included, rather than replying to a strawman
But the phrasing of your latest comment feels to me overconfident, or somewhat like it’s aiming at rhetorical effect rather than just sharing data and inferences, or somewhat soldier-mindset-y
In particular, personally I dislike the use of “110%”, “maximally”, and maybe “emphatically”.
My intended vibe here isn’t “how dare you” or “this is a huge deal”.
I’m not at all annoyed at you for writing that way, I (think I) can understand why you did (I think you’re genuinely confident in your view, feel you already explained it, and want to indicate that?), and I think your tone in this comment is significantly less important than your post itself.
But I do want to convey that I think debates and epistemics on the Forum will typically be better if people avoid adding such flourishes/absolutes/emphatic-ness in situations like this (e.g., where the writing shouldn’t be optimized for engagingness or persuasion but rather collaborative truth-seeking, and where the disagreed-with position isn’t just totally crazy/irrelevant). And I guess what I’d prefer pushing toward is a mindset of curiosity about what’s causing the disagreement and openness to one’s own view also shifting.
(I should flag that I didn’t read the post very carefully, haven’t read all the comments, and haven’t formed a stable/confident view on this topic. Also I’m currently sleep-deprived and expect my reasoning isn’t super clear unfortunately.)
I also think the comment is overconfident in substance, but that’s something that happens often in productive debates, and I think that cost is worth paying and hard to totally avoid if we want productive debates to happen.)
For the record, these were not edited in after seeing the replies. (Possibly I edited them in a few minutes after writing the comment—I do that pretty frequently—but if so it was before any of the replies were written, and very likely before any of the repliers had seen my comment.)
At times like these, I don’t really know why I bother to engage on the EA Forum, given that people seem to be incapable of engaging with the thing I wrote instead of some totally different thing in their head.
I’ll just pop back in here briefly to say that (1) I have learned a lot from your writing over the years, (2) I have to say I still cannot see how I misinterpreted your comment, and (3) I genuinely appreciate your engagement with the post, even if I think your summary misses the contribution in a fundamentally important way (as I tried to elaborate elsewhere in the thread).
I’m impressed by the belief you have in the AGI community to have much more insight to the future of the rest of humanity than the rest of humanity! You and the AGI community are either very insightful or very delusional and I’m excited to find out which one as time progresses and if I’m lucky to live long enough to see how these short time forecasts play out. I wonder if it follows that you also believe that the topmost percentile of thinkers or prognosticators out of all the ~8 billion humans currently alive are in the AGI community? I also admire that despite the fact that you don’t agree with most of humanity you are still willing to work on preventing AI X-risk and save us! That’s some true altruism right there!
I am issuing supesanon a warning for this comment. Let’s keep it collaborative and not snarky.
This is one way of looking at the data (with my overlapping data claim already noted).
Here is another (much longer) way of looking at the data.
Here is 800 years of history courtesy of the BoE
Eyeballing that, I’d say the relationship is strongly negative...
Interesting. What would be the theoretical explanation for a negative relationship?
I would guess some combination of:
Increasing longevity (which note the authors say has an effect in the FOOM scenario, but not in the aligned scenario...)
Decreasing credit risk (what was ‘risk free’ in the 1400s is very different to what is ‘risk free’ today)
Consumption preferences being correlated to growth
I don’t really have a strong opinion on any of these—macro is really hard and really uncertain. To quote a friend of mine:
Borrowing and consuming because AGI is coming seems an incredibly risky proposition.
Yeah, I agree that AGI could also make you want to save more. One factor is that higher interest rates can make it better to save more (depending on your risk aversion). Another is that AGI could increase your lifespan, or make it easier to convert money into utility (making your utility function more linear). That it could reduce the value of your future labour income is another factor.
I’ve been discussing this concept for some time now, so I’m glad to see some people take a more formal stab at it. However, I must say that I’m overall disappointed with this post. I’ll just lay out a few summary points, and if people are actually still reading this deep into the comments and want to hear more thoughts, I can oblige later:
With the *slight* exception of the “you could be earning alpha” section, it does not really get deep into the causal mechanisms for why you should expect markets to be efficient.
I think this post should have done a better job of aggregating and responding to contrary viewpoints; I feel like the post largely bypassed the key arguments (cruxes) of existing critics and went straight to people who were not familiar with the EMH+AGI debate, especially with all the references to empirical evidence (see next point).
Granted, “better job” implies that the article did this at all, which I don’t recall it really doing, aside from occasional references to other viewpoints (IIRC).
The empirical sections I thought were decent, but they missed the crux of the debate.
The fact is, we don’t seem to have much of any precedent of this kind of scenario, with some debatable exceptions regarding the Cold War / Cuban Missile Crisis—yet the authors didn’t even spend that much time focusing on these examples which seemed to be the most relevant.
Overall, I thought that the empirical sections were not very helpful for the debate, aside from perhaps targeting audiences who are the very early stage of the debate and hastily think “I’ll dismiss EMH in general because of X.”
(I am normally a big proponent of EMH-style reasoning; it’s not like I and many other people I know who are part of this EMH+AGI debate are saying “EMH has never worked!”).
One cross-cutting objection off the top of my head is that the people with Special And Justified Knowledge may not be able to profit fast enough to correct the market. [I have read other comments’ responses, and respond to one response in the next point]
This especially applies to two closely related causal mechanisms of the EMH, profit snowballing and dogpiling: “Suppose you have someone who has better insights than everyone else about some asset. They may not be rich and for various related reasons they are unable to immediately correct the market (i.e., the market is actually temporarily inefficient). However, if they are right/superior, they either a) can keep profiting over and over again until they become liquid/rich enough to individually correct the market, and/or b) other people see that this person is profiting over and over again so they jump in and contribute to market correction.”
The problem is that it might be the case that there is only one or two cycles for profit with AGI until the world goes crazy, but it could take many years for this strategy to actually profit, during which time the market will be “temporarily” inefficient. If real interest rates don’t rise for 15 years, and only start to rise ~5 years before AGI, the market is inefficient for 15 years because small players can’t profit to fix the situation.
“But those are just two causal mechanisms,” the authors/defenders hypothetically reply, “and sometimes the market still corrects even without those mechanisms; look at the big short! And there are probably enough AI-conscious investors such that they could alter the market...”
First, I think it’s worthwhile to highlight my view that the debate can unproductively explode at this point because the original authors didn’t (IMO) do a good job of laying out their own causal mechanisms. This forces critics into a game of whack-a-mole filled with delays at the need to comment, wait for responses, address new causal mechanisms, parse out alternate branches of disagreement, etc.
(However, I think the following subpoint addresses a fairly large part of the debate)
Second, I don’t think that the authors did a good job of differentiating between “sudden surprise takeoffs” (e.g., ~1 year of warning time and ~1/3rd of people believe this) vs. “forecastable takeoffs” (e.g., ~10 years of warning time and >1/10th of people believe this). This seems somewhat cruxy in at least one direction—against the authors’ viewpoint. Ultimately, (correct me if I’m wrong) it seems that the authors’ proposed strategy for profit relies on the belief that as you get closer to the expected AGI date, more/richer people will start to agree with your predictions (and still see benefits from getting in on profit): otherwise, prescient investors could believe “AI is very likely to occur around year X, but very few people or institutions will recognize this before X−3 years, such that real interest rates probably won’t change much at all until it’s too late, and when they do change:
The counterparty/non-payment risk may be high;
I prefer a 50% chance of being moderately wealthy for 10 years to a 50% chance of being really rich for ~2 years before I die (with a 50% chance of being poor for 10 years if I bet big and am wrong);
The world might experience chaos which undermines my ability to spend money on things I value, etc.”
Third, I don’t think the claim that “there are probably enough AI-conscious investors...” is supported in this post, and I’m hesitant on this point. I am willing to budge, and this could be a fairly important point if we are in a “forecastable/slow takeoff” scenario, but I would like to see the post focusing on that leaf of the debate, not trying to recreate the trunk of the debate tree. And again, if we are in a “sudden short timeline” scenario, I suspect that this possibility doesn’t matter all that much.
Sure, some people may hold this view, but a) I’m skeptical you’ll convince them with this article, and b) you can’t just focus on empirics and then declare victory when there are still many critics who have objections you haven’t directly addressed.
A few years ago I asked around among finance and finance-adjacent friends about whether the interest rates on 30 or 50 year government bonds had implications about what the market or its participants believed regarding xrisk or transformative AI, but eventually became convinced that they do not.
As far as I can tell nobody is even particularly trying to predict 30+ years out. My impression is:
A typical marginal 30-year bond investor is betting that interest rates will be even lower in 5-10 years, and then they can sell their 30 year bond for a profit since it will have a higher locked-in interest rate than anything being issued then.
Lots of market actors have a regulatory obligation (e.g. bank capital requirements) to buy government bonds which drives the interest rate on such bonds down a lot, to the point that it can be significantly negative for long periods even when the market generally expects the economy to grow. Corporate bonds have less of this issue but are almost never issued for such long durations.
It’s true that the market clearly doesn’t believe in extremely short timelines (like real GDP either doubling or going to zero in the next 5-10 years). But I think it mostly doesn’t have beliefs about 30+ years out, or if it does their impacts on prices are swamped by its beliefs about nearer-term stuff.
I am confused by some of the logic in this post.
If TAI arrives soon, either I’ll be dead (so I should borrow and spend all my money now—this part makes sense to me) or I’ll be fantastically rich in a post-TAI utopia, so you say I should borrow and spend all my money now, to smooth out my consumption. Apparently this is the consensus of all mainstream economics and also the results of common-sense.
But you also say: if TAI arrives soon, that means real interest rates should be higher, so I should engage in a risky investment strategy of shorting the bond market. This strategy will make me poorer in the short-term, but will pay off by making me richer later, once markets realize the consequences of TAI.
That second idea seems like the opposite of consumption smoothing?? Maybe it’s worthwhile because I would become rich enough that the extra volatility is worth it to me? But what’s the point of being rich for just a short time before I die, or of being rich for just a short time before TAI-induced utopia makes everyone fantastically rich anyways?
I also do not find it plausible that a vision of impending TAI-induced utopia, an acceleration of technological progress and human flourishing even more significant than the industrial revolution, would… send stock prices plummeting? I am not an economist, but if someone told me that humanity’s future was totally assured, I feel like my discount rate would go down rather than up, and I would care about the future much more rather than less, and I would consequently want to invest as much as possible now, so that I could have more influence over the long and wondrous future ahead of me. You could make an analogy here between stable utopia and assured personal longevity (perhaps to hundreds of years of age), similar to the one you make between human extinction and personal death. The promise of a stable utopian future (or personal longevity) seems like it should lead to the opposite of the short-term behavior seen in the near-term-extinction (or personal death) scenario. But your post says that these two futures end up in the same place as far as the discount rate is concerned?
To quote a Peter Thiel joke that didn’t make it into my post about investing under anthropic shadow, “Certainly if we could just live to all be 100, that would be quite a transformation. There is good news and bad news. The bad news is: If you don’t believe in the good news, you’re not saving enough for retirement.”
Personally, I’d expect some actors to be really greedy. In the upside scenarios, they’d want to be 100x fantastically rich, not just 2x.
There haven’t been many historic examples where the wealthy / governments didn’t try to take advantage of big opportunities just because things were so good. Instead, when new resources opened up, some would rush to take as big a share as possible.
The way to resolve the apparent contradiction is to return to the logic of consumption smoothing.
Let’s say you believe that TAI is coming. You expect that you and everyone you know will be dead or fabulously rich in the medium-term. You think that this has implications for markets. You like consumption streams to be smooth.
You (hopefully) start today with savings.
The logic of consumption smoothing says that you want to save less/dissave more relative to what would have been the case if you did not believe TAI is coming. In the first instance this means spending down your assets. But no borrowing! There’s no point in (expensive) borrowing if you can still spend down your assets. It also means that you still have assets! Where should you put those assets? Well, your beliefs imply that real interest rates will be higher than the market expects, so you don’t want to hold inflation-protected treasuries.
The point about risk is neither here nor there. If this is a concern you can decrease the degree of underweighting (or the size of your short position). The point is that at the margin you disprefer inflation-protected treasuries to what would have been the case if you did not have your TAI beliefs.
At some future point, you have spent down your savings.
But, no worries, TAI is coming! You commit to take out loans. You don’t have any more savings, so there is no point in investing, so there is no point bothering to underweight inflation-protected treasuries. (Unless leverage, but logic would be similar.)
I’d suggest investing in companies correlated with AI progress, not shorting bonds. Some of the best investments are private, but you can still buy stocks like MSFT, TSM, Samsung and ASML. This seems like a much better way to bet on AI progress.
Also if you expect aligned AI you should probably consume much less? Inequality might get locked in and returns to capital might skyrocket. Bad time to be a big spender. If we get unaligned AI and the world ends or aligned AI paradise so be it.
The authors address your first comment in this appendix.
I read the appendix and it doesn’t seem very convincing. For example, they bring up OpenAI, but you can buy MSFT stock; MSFT already owns a chunk of OpenAI and is in talks to own a much larger share.
I do not think appendix 1 is likely to convince people that shorting interest rates is the best way to express an AI thesis.
If you don’t like the OpenAI example, consider the possibility that other non-public companies could develop AGI...!
I did say some of the best investments are private. But there are good public investments (MSFT, TSM, SMSN, ASML). Nothing in investing is guaranteed but trying to invest in AI companies seems like a much better bet than shorting interest rates. Also many rationalists are rich enough they can try to invest in various private companies.
It might be more convincing to directly attack their point that the price of MSFT, TSM, SMSN, ASML, etc. is a function of not only future profits but also future interest rates.
Their claim is that the effect on equity prices is messy because of interest rates, not that future expected profits are necessarily lower than you believe.
Aren’t there good reasons not to invest in AI capabilities, like reducing P(doom)?
I am surprised that critical commenters have focused on the irrationality or inadequacy of financial markets, rather than what feels like the more obvious point:
Unaligned AI need not imply extinction, and aligned AI need not imply 30% growth. Financial markets can be inconsistent with these implications without being inconsistent with Big Deal AI.
On unaligned AI: eyeball the reviews of the Carlsmith report. Looks like average P(xrisk | misalignment) ~= 45% among reviewers.
On aligned AI: 30% growth is crazy high! The authors are unwilling to make their claims for less-crazy growth figures:
This is a valuable point, but I do think that giving real weight to a world where we have neither extinction nor 30% growth would still be an update to important views about superhuman AI. It seems like evidence against the Most Important Century thesis, for example.
An update, yeh, but how important?
I think Most Important Century still goes through if you replace extinction/TAI with “bigdealness”. In fact, bigdealness takes up considerably more space for me.
To the degree that non-extinction/TAI-bigdealness decreases the magnitude of implications for financial markets in particular, it is more consistent with the current state of financial markets.
Well I think MIC relies on some sort of discontinuity this century, and when we start getting into the range of precedented growth rates, the discontinuity looks less likely.
But we might not be disagreeing much here. It seems like a plausibly important update, but I’m not sure how large.
[Edit: this is no longer applicable, sheesh stop downvoting]
Your tweets appear to be set to private (thus impacting the accessibility of the last link).
Ah, thank you for mentioning; corrected in original comment.
Great post! This is a fascinating argument and makes me think quite a bit.
A couple of tests pitting the EMH against mortality risk suggest themselves, since nuclear war would have had similar ramifications (with a similar amount of uncertainty, given that it occurs!):
What happened to bond prices during the Cuban missile crisis? How completely did the public think nuclear war would annihilate society? Was nuclear winter a known concept yet? You could make the argument that the public thought P(death) per year rose from ~2% to 20–50%. What would this theory suggest should happen to bond prices?
What about following the collapse of the Soviet Union? After things settled down, this arguably lowered the probability of nuclear war by 1%/year. That gives us (1 − pbinom(0, 30, .01)) = 26% reduction of existential risk over 30 years. That should still be measurable in long-term treasury rates.
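The 26% figure is just the complement rule; for readers without R’s `pbinom`, the same number in Python:

```python
# Probability of at least one nuclear-war year in 30 years at 1%/year,
# i.e. the cumulative risk removed by a 1%/year reduction:
p_per_year = 0.01
p_30yr = 1 - (1 - p_per_year) ** 30
print(round(p_30yr, 2))  # 0.26
```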
Appendix 3 has a review of some related papers, FWIW!
Seems relevant (link) :
(agree/disagree with this comment to agree/disagree with the tweets below)
Edit: Yudkowsky commented on the post, consider replying to him directly
Huh? Terminal cancer victims taking out 30-year mortgages is extremely different, in terms of the counter-party’s willingness to trade.
Narrow the thought experiment to “cancer that banks aren’t able to find out about” and the thought experiment goes through fine. And US institutions are strongly supportive of secrecy, in general, so I think this is actually the typical case (at least for people who are young enough that seeking a large loan is not itself suspicious).
That does not get the thought experiment through.
Mortgage rates for older people are higher. And if mortgage holders die, the mortgage must still be paid by the executor of an estate, which is a disincentive for anyone with a bequest motive.
I’m sure that we can find some corner case where young cancer victims with no friends/family or no regard for their friends/family act otherwise. But this hardly seems important for the point that you—yes, you—can make money by implementing the trades suggested in this piece. Which is the claim that Yudkowsky is using the cancer victim analogy to argue against.
Pasting some of my replies to this from twitter FWIW:
That’s just not correct, unless I’m misunderstanding—
if you short rates, and next day the market decides you are right, then real rates spike and you make money. Simple as that
So I don’t follow your claim ¯\_(ツ)_/¯
Sovereign debt markets are some of the most well-functioning financial markets ever created by man—this is literally orders of magnitude off. This is just not Tether
I think the claim is that with fast takeoff, the market will either never decide that you are right (we die before the market realizes), or will decide you are right and you get rich but have only a short time to live, so there’s no value to being rich.
I want to suggest a bunch of caution against shorting bonds (or tips).
The 30yr yield is 3.5%, so you make −3.5% per year from that.
You earn the cash rate on the capital freed up from the shorts, which is 3.8% at Interactive Brokers.
If you’re right that the real interest rate will rise 2% over 20 years, and average duration is 20 years, then you make +40% over 20 years – roughly 2% per year.
If you buy an ETF, maybe you lose 0.4% in fees.
So you end up with a +1.9% expected return per year.
This would have a third of the volatility of stocks, so you could leverage it several times, but then you’d need to pay the margin cost of ~4%.
So it doesn’t seem like an amazing trade in terms of expected returns (if I’ve estimated this correctly).
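Laying the components above out explicitly (all figures are my own rough estimates from the steps above, not market data):

```python
# Back-of-envelope annual expected return for shorting a 30y bond ETF.
short_yield_cost = -3.5   # pay away the 30y yield while short (% per year)
cash_rate = 3.8           # earned on capital freed up by the short (% per year)
rerating = 40.0 / 20      # +40% from a 2% real-rate rise on ~20y duration, over 20 years
etf_fees = -0.4           # ETF expense drag (% per year)
expected_return = short_yield_cost + cash_rate + rerating + etf_fees
print(round(expected_return, 1))  # 1.9 (% per year)
```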
It gets worse if you consider correlations – if we go into a recession, yields might fall 1-2%, which would mean you lose 20-40%, and you make those losses at the worst possible time – when everything else is going down.
In addition, a neutral portfolio is something like 50% equity, 20% real assets and 30% bonds, so that should be our prior, and then you’d want to make a bayesian update away from there based on your inside view.
In effect, in your portfolio optimizer, you could set the expected returns of long bonds to be say 1.5% rather than 3.5%. My guess is that would spit out having say 0-10% bonds rather than 30%, but not actively shorting them.
Tldr my guess is that most investors (if they believe the thesis) should just underweight bonds rather than actively short them.
I’d be very keen to hear more comments on this.
I think the calculation you’ve done here is −3.5% + 3.8% + 2% − 0.4%
This doesn’t quite make sense. The first rate you are talking about is the yield on a 30y bond. The second rate (should be) the overnight repo. What you should actually look at is the average overnight repo over 30y. The 30y SOFR swap is ~2.9% which would be a more relevant comparison to your 30y.
A simpler way to think about all of this would be to have some number for losses on fees (“shorting fees” ie your repo costs + ETF fees if you execute via an ETF) and some number for return from being right (change in real rates * duration).
I would agree (roughly) with your calculations if this happens gradually over 20 years. If the market is about to realise this overnight, then you will make 40% overnight. This is what they are advocating for. (Maybe not overnight, but over a shorter time horizon than you are implying.)
(Either way I agree with you that shorting bonds is a terrible strategy to implement just based on this post)
Thanks that makes sense.
So if you implemented this with a future, you’d end up with −3.5% + 2.9% + rerating return = −0.6% + rerating.
With a 2% p.a. re-rating return over 20 years, the expected return is +1.4%, minus any fees & trade management costs.
If it happens over only 5 years, then +7.4%.
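A sketch of the futures version of this calculation, using the numbers from this thread (the 2.9% swap rate and 20-year duration come from the comments above):

```python
# Shorting via a future: pay away the 3.5% yield, fund at the ~2.9% 30y swap
# rate, and vary how fast the rerating (2% rate rise * 20y duration) happens.
bond_yield, funding = 0.035, 0.029
carry = -bond_yield + funding        # -0.6% per year before the rerating
total_rerating = 20 * 0.02           # duration * rate rise = +40% total

for horizon in (20, 5):
    print(f"{horizon}y: {carry + total_rerating / horizon:+.1%} per year")
# roughly +1.4% per year over 20 years, +7.4% per year over 5 years
```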
I’m really confused about where those numbers for using futures come from? (But yes, the expected return with low leverage is not spectacular for a 2% move in rates.)
I’m an economist who’s been thinking about some related issues.
I agree with the article that increases in anticipated capital productivity (or decreases in anticipated longevity) should tend to increase global interest rates. However, the blog post ignores a giant secular trend that could be driving recent (last few decades) low interest rates – a global “savings glut” due to the growth of a giant, thrifty, Asian upper and middle class. This group has a huge hunger for investment vehicles, and has been driving down global interest rates.
In my recent global automation simulation paper, which models in detail demographic and productivity trends across the regions of the world, we find that interest rates will continue to decline over the next thirty years if automation (defined as capital-biased technical change) continues at its historical rate. If automation were to proceed at 5x its historical rate, we calculate that world interest rates are *still* projected to decline, and to be lower in 2050 than today. That said, if automation were to occur 10x faster than its historical rate moving forward, we would anticipate that interest rates in 2050 would be almost 50% higher than they are today.
In the language of the blog post, I’d argue that *rho*, the time discount rate of the representative global saver, is in essence getting smaller as more of world income is going to thrifty middle-aged Asians. A fast enough rate of automation can overcome this headwind, but there is a large headwind to be overcome.
Paper here: https://www.nber.org/papers/w29220 see section “4.3.3 Faster Rates of Automation”
I haven’t run the simulation to see what the effect would be today of a giant anticipated increase in automation in 30 years, or of everyone dying with certainty in 30 years, but those are certainly possible in our framework, and something we could try if anyone thinks it would be interesting.
Thanks Seth, we’ll read your paper carefully. I’ll just highlight that really the purpose of the analysis above is to engage specifically with the extreme scenario you mention at the end
Also note we briefly allude to demographic trends, but in the (blog post!) analysis here, we want to ignore them because they seem plausibly swamped by the huge growth/mortality scenarios under consideration. As a quick BOTEC:
We use ρ=0.01
If—as you suggest—we model demographic trends in reduced form as a decrease in rho, then
At most the demographic effects could shrink our estimates of the increase in the real rate by one percentage point. Of course, that’s just in this simple rep agent model (TIABPNAJA; Econometrica isn’t going to accept this!)
(PS: some of your other papers which I’ve already read before, I’ve found useful to read!)
Hi Basil, thanks so much for this gracious response. I don’t quite buy this BOTEC though—I don’t see any theoretical reason the abstract representative agent couldn’t have a negative time preference rate. Certainly, at the individual level, people might prefer to consume during their retirement or to build up savings for high anticipated taxes/costs when they’re old. There is no mathematical problem with an individual having a negative time preference rate (e.g. Utility = log(C_young) + 2*log(C_old)). So I don’t see why rho = 0 needs to be a lower bound.
Thanks for your gracious words about my other work, and looking forward to your thoughts on the paper.
Unless I’m missing something, an infinitely lived agent (the framework at play here) can’t have a negative time preference without violating the transversality condition, saving in every period and never consuming. An overlapping generations approach could yield something totally different, though.
Perhaps just a technicality, but: to satisfy the transversality condition, an infinitely lived agent has to have a discount rate of at least r (1-σ). So if σ >1—i.e. if the utility function is more concave than log—then the time preference rate can be at least a bit negative.
Yes, I was referring to finite lived agents, which does start to get away from what the representative agent framework can handle.
To your technical point—if the real interest rate were negative, couldn’t an infinitely lived agent be able to still satisfy transversality? If there’s no productive use of additional capital at the margin, and a shitty storage technology, that would be the case. And, endogenously, a super-saving society that only wanted to throw a party at infinity might start running into that problem fast.
Thank you for the post! I’m very interested to see more work on this topic.
I feel a little bit unsure about the focus on the bonds – would be very curious to hear any reflections on the below.
As you say, if real interest rates rise, that should affect all assets with positive duration.
Perhaps then the net effect of having the view that real interest rates will rise is just that you should reduce overall portfolio duration. A 60:40 portfolio has an effective duration of ~40 years, where most of that duration comes from equities. Perhaps someone who believes this should target, say, a 20 year average duration instead (through whatever means seems least costly, which could mean holding fewer equities).
Perhaps equivalently, if real interest rates are going to rise, then all financial assets are currently overpriced, so maybe the effect would be holding fewer financial assets in general, and holding more cash / spending more.
My understanding is that an important part of the reasoning for a focus on avoiding bonds is that an increase in GDP growth driven by AI is clearly negative for bonds, but has an ambiguous effect on equities (plus commodities and real estate), so overall you should hold more equities (/growth assets) and less bonds. Is that right?
That makes sense to me, but then I still feel unsure about, having tilted towards equities, whether your overall exposure should be higher or lower.
(And tilting towards equities will increase the effective duration of your portfolio, making an increase in real interest rates worse for you all else equal.)
If we use the Merton share to estimate optimal exposure, that depends on the difference between the expected return of the asset and the expected real interest rate over your horizon. Perhaps with equities you might expect both returns and the interest rate to rise by 3%, which would cancel out, and you end up with the same exposure. But with bonds only the interest rate will rise, so you end up with much lower exposure (potentially negative exposure if your expected interest rate is higher than the expected returns). Is that basically the reasoning?
Thanks for these comments. In short, to all of your questions, the answer is “yes”. Some specific comments:
1. This is perhaps already clear, but it might be worth emphasizing that the economic logic is: real rates are particularly useful for forecasting, since the sign of the effect is rather unambiguous in the TAI scenario; but it’s possible the expected returns could be higher for trading on other bets, if you’re willing to make stronger assumptions (e.g. “compute will be important”).
2. Re: equities, the appendix post (especially #4 there) summarizes how we’re thinking about this. To spell out a bit more:
An approximation for stock pricing is the Gordon growth formula, P=D/(r−g), where
P is stock price (i.e. market cap)
D is some initial level of dividends
r is the real rate
g is the growth rate of dividends over time
For the equity market as a whole, a natural approximation is that the growth rate of dividends equals the growth rate of the economy. And as we pointed out in section I, a first-order approximation for the Euler equation under certainty (“the Ramsey rule”) is r = ρ + θg.
Combining the Ramsey rule and the Gordon growth formula, we have P = D/(r−g) = D/(ρ + θg − g) = D/(ρ + (θ−1)g).
How to interpret this? As a benchmark, suppose theta=1. That’s log utility (which I think is the benchmark used in a lot of EA, e.g. at OpenPhil, and has some support in the literature). Then you have P=D/rho. That is, price is future profits discounted by your rate of time preference—raising or lowering the growth rate doesn’t affect the stock price at all, because it ‘cancels out’ in a specific way.
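A quick numerical check of this, with illustrative parameter values:

```python
# Gordon growth formula with r from the Ramsey rule: P = D/(rho + (theta-1)*g).
def price(D, rho, g, theta):
    r = rho + theta * g   # Ramsey rule
    return D / (r - g)    # Gordon growth formula

D, rho = 1.0, 0.01
print(price(D, rho, 0.02, theta=1.0))   # ~100: with log utility, P = D/rho
print(price(D, rho, 0.30, theta=1.0))   # ~100: growth cancels out entirely
print(price(D, rho, 0.30, theta=2.0))   # ~3.2: with theta > 1, faster growth *lowers* P
```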
So, that denominator is picking up the ‘Merton optimality’ that you mention. And I guess the reason I wrote all of this out was to reply to this:
Yes! But also they might not cancel out. It could go either way depending on theta ¯\_(ツ)_/¯. To my knowledge it’s an active area of debate (‘financial economists think theta < 1, macroeconomists think > 1’).
If you really want to nerd out, Cochrane has an extended wordy discussion here and Steinsson has long slides here (theta is the inverse of the elasticity of intertemporal substitution).
This is perhaps more than you asked for, and yet I’m not sure if this answered exactly what you were asking. Let me know if not!
Sorry for making you repeat yourself, I’d read the appendix and the Cochrane post :)
To summarise, the effect on equities seems ambiguous to you, but it’s clearly negative on bonds, so investors would likely tilt towards equities.
In addition, the Sharpe ratio of the optimal portfolio is decreased (since one of the main asset classes is worse), while the expected risk-free rate over your horizon is increased, so that would also imply taking less total exposure to risk assets.
What do you think of that implication?
One additional piece of caution is that within investing, I’m pretty sure the normal assumption is that growth shocks are good for equities—e.g. you can see the chapter on the growth factor in Expected Returns by Antti Ilmanen, or read about risk parity. There have been attempts to correlate the returns of different assets with changes in growth expectations.
On the other hand, I would guess theta is above one for the average investor.
“Negative for bonds” does not imply “shift investment from bonds to stocks”, though. It could mean “shift toward short bonds” or “shift investment from bonds, to just invest less overall”.
I would push back on this too, for a related reason—the optimal portfolio can include “go short bonds”, which might now have a higher expected return.
I think the standard asset pricing logic would be: there is one optimal portfolio, and you want to lever that up or down depending on your risk tolerance and how risky that portfolio is. So, whether you ‘take less total exposure to risky assets’ depends on whether the argument here updates your view on how ‘risky’ the future is (Tyler Cowen has argued this, I’m not sure it’s super clear cut though).
That makes sense. It just means you should decrease your exposure to bonds, and not necessarily buy more equities.
I’m skeptical you’d end up with a big bond short though—due to my other comment. (Unless you think timelines are significantly shorter or the market will re-rate very soon.)
In the Merton share, your exposure depends on (i) the expected returns of the optimal portfolio, (ii) volatility/risk, (iii) the risk-free rate over your investment horizon, and (iv) your risk aversion.
You’re arguing the risk free rate will be higher, which reduces exposure.
It seems like the possibility of an AI boom will also increase future volatility, also reducing exposure.
Then finally there’s the question of expected returns of the optimal portfolio, which you seem to think is ambiguous.
So it seems like the expected effect would be to reduce exposure.
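The Merton-share arithmetic discussed in this thread can be sketched as follows; the function is the textbook formula, and every number below is an illustrative assumption rather than an estimate:

```python
# Merton share: optimal weight in a risky asset is
# (expected return - risk-free rate) / (risk aversion * variance).
def merton_share(mu, r, sigma, gamma=3.0):
    return (mu - r) / (gamma * sigma ** 2)

# Equities: if expected returns and the rate both rise 3%, the share is unchanged.
print(merton_share(0.08, 0.02, 0.16))   # ~0.78
print(merton_share(0.11, 0.05, 0.16))   # ~0.78, same as above
# Bonds: expected return stays ~3.5% while the rate rises, so the share shrinks,
# and turns negative (a short) once the rate exceeds the bond's expected return.
print(merton_share(0.035, 0.02, 0.10))  # +0.5
print(merton_share(0.035, 0.05, 0.10))  # -0.5
```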
I’m not really sure how you get that? The duration on the bond portion is going to be ~7-10y which would imply 60y duration for equities, which I think is wrong.
That is their claim, but as I pointed out here the evidence isn’t so clear.
I think the effective duration on equities is roughly the inverse of the dividend yield + net buybacks, so with a ~2% yield, that’s ~50 years.
Some more here: https://www.hussmanfunds.com/wmc/wmc040223.htm
I don’t think that makes much sense tbh.
I think the key point is just equities will also go down if real interest rates rise (all else equal) and plausibly by more than a 20 year bond.
I agree, although I’ll give you good odds the 20y moves more.
Regarding the second point about how EAs (or anyone else) might exploit an inefficiency in this space, I think it’s tricky just because of the number of other risks that inform the pricing of long-dated bonds. Many of these (climate, demographics, geopolitics, populism, etc.) could wipe out any short (or especially leveraged short) position before TAI is realised.
As noted in my other comment I expect for someone with high-conviction views on short TAI timelines there are bets that are:
Much higher in expected returns
Less capital intensive
Less susceptible to other risks
Examples of these bets are broadly discussed elsewhere but often are related to long/short equity bets on disrupting/disrupted companies and companies part of the supply chain (semiconductors design/fab/tooling, datacentre, data aggregators, communications etc..)
I think perhaps at best short long-dated bonds could form part of a short-timelines TAI bet in order to hedge against long positions elsewhere/maintain neutrality against other factors rather than the core position. It feels likely there are considerably better options for someone taking such a bet (as you allude to in the opportunities for future work)
Ignore the short position: you could just underweight these assets relative to global market portfolio.
By the way, someone wrote this Google doc in 2019 on “Stock Market prediction of transformative technology”. I haven’t taken a look at it in years, and neither has the author, so understandably enough, they’re asking to remain nameless to avoid possible embarrassment. But hopefully it’s at least somewhat relevant, in case anyone’s interested.
(Nice, thanks for sharing)
Thanks for adding comments to it!
This is one of my favourite forum posts ever. A pleasure to read. Congrats to the three of you.
+1. I found it to be an extremely thought provoking, informative, and high-quality post. Really well done. [FWIW: I had very weak priors over AGI timelines (I’m too confused to form a coherent inside view) and this seems like a much more reliable outside view than I was defaulting to].
Whenever I see charts like this in a financial context I twitch. We have 30 years of data for UK real rates, and less for other issuers. That means ~2 non-overlapping UK data points, yet on your second chart I can count at least 15(?) data points.
I think the fundamental assumption that aligned AGI would cause dramatic economic growth is simply wrong. Like super duper wrong.
It’s important to differentiate between AGI and both superintelligence and God. Most EAs are thinking about an omnipotent and omniscient being, not AGI.
Is being a high IQ person in Russia or India that useful? Why then do their smart people migrate to the US? Because the US can use that intelligence (Sundar Pichai) while at home they may become a mid level bureaucrat (Sundar Pichai’s dad).
Say you create an AGI, and it comes out and tells you, “I’ve solved fusion, let’s build plants” … you go “Cool bro, need He3? It’s on the moon, costs too much to go there, figure out how to get there cheaper first.”
So there are physical, logistical, real limits to what can be achieved in the physical world with lots of intelligence. Also economic. Because there won’t just be one AGI, there are likely to be multiple, at the very least latency means one on Earth and one on Mars, and I suspect latency will dictate multiple on Earth.
So we are likely to find that other bottlenecks besides intelligence alone limit economic growth, and that these will have to be figured out. And also the AGI, again not being omniscient, will take time to figure things out. Like tell it to solve aging, and it comes back and asks for 100 years of compute time—it’s just not feasible.
This is what weirds me out about Yud and other EAs: it’s clearly a religious belief that we are creating an omnipotent being, rather than a perfectly ordinary intelligent creature that is still limited by the availability of data, compute, data storage, network latency, etc.
Regarding the first point about the extent to which we should update timelines based on the fact the bond market is not pricing in short timelines for TAI; my prior is that in general the fixed income (bonds) markets are fairly efficient and are more sophisticated/efficient than equity markets. This leads me to initially believe we likely should update based on this/consider it more strongly than bullish equity sentiment towards some AI themes.
However, on the flip side, I think the size of this market does mean it can retain inefficiencies around subtle themes for longer. I think of this as a form of Expecting Short Inferential Distances—there are a lot of inferential reasoning steps around TAI, scaling, take-off, etc., which make it slower for conviction to spread compared to something like demographic shifts, which have much more straightforward causality. This is relevant because moving government bond markets requires people to take this bet with a huge amount of assets, as it is a very capital-intensive trade with a lot of exposure to other uncorrelated risks/confounding variables (climate, demographics, geopolitics, populism, etc.). The reason I think it may be unlikely that many people are making this bet is related to this:
I suspect there are far more highly levered bets that market participants with a high-conviction belief in short TAI timelines could take, potentially diluting the impact on lower beta instruments (like bonds). For example I expect even being long fairly broad equity markets might outperform this bet and much more targeted bets (especially if they could be hedged against other risks, bringing them closer to a ‘pure’ TAI bet) could be expected to return many multiples of the short-US30Y trade.
If the amount of money being managed by those with high conviction TAI views is ‘small’ (<<$100bn) then I expect there are many more favourable inefficiencies/price dislocations for them to exploit and not a sufficient mass of ‘smart TAI’ money to spill over into long-dated bonds.
Here’s another way of putting things, that I’ll post here for reference:
Suppose I think Google is undervalued, because it is going to have a $1T dividend in 2030, and the market doesn’t realize this.
1. I buy Google today at some cheap price.
2. Possibility 1: before 2030, the market “corrects” and realizes that it was undervaluing Google. The stock price rises, and I receive capital gains.
3. Possibility 2: the market does not “correct” before 2030. I still get the big dividend in 2030, and was able to get it for a cheap price in 2023.
The above seems exactly analogous to the case with existential risk.
Suppose I think bonds are overvalued, because in 2030 the world is going to blow up.
1. I short real rates today.
2. Possibility 1: before 2030, the market “corrects” and realizes that it was overvaluing bonds. Rates rise, and I receive capital gains.
3. Possibility 2: the market does not “correct” before 2030. I still was able to take out a cheap loan in 2023 (i.e. by selling short bonds), and don’t have to pay it off in 2030 when the world ends.
You’re missing the ways in which money could mean something else entirely in an aligned AGI future:
we could get a stable world dictatorship (stable for up to billions of years). They could abolish money, markets, and most positions of power that exist today if they wanted to, for any reason whatsoever (including stupid ones). Even if money exists, it may wield less power in the new world order, with other sources of power being more important.
we may no longer need money and markets because AGI can solve for extrapolated volitions at the level of individuals, and all aspects of the socialist calculation problem (how to aggregate information about supply and demand, how to accurately control supply and demand without a market).
your preferences might diverge from what future markets supply. Suppose the future is an em arms race, with markets focussing primarily on supplying chips for ems to become ever smarter. You might not be able to buy ordinary goods like food, water, or a human-habitable house, even if you have money. If you don’t want to become an em and participate in these markets (i.e. in the arms race), you may prefer dying instead.
I wanted to mention this because I wonder if there’s a taboo around talking about how much more radically authoritarian our default AGI future could be.
I think this post contains many errors/issues (especially for a post with >300 karma). Many have been pointed out by others, but I think at least several still remain unmentioned. I only have time/motivation to point out one (chosen for being relatively easy to show concisely):
Levered ETFs exhibit path dependency, or “volatility drag”, because they reset their leverage daily, which means you can’t calculate the return without knowing the path the interest rate takes on its way to the 3% rise. TTT’s website acknowledges this with a very prominent disclaimer.
You can also compare 1 and 2 and note that from Jan 1, 2019 to Jan 1, 2023, the 20-year treasury rate went up ~1%, but TTT is down ~20% instead of up (ETA: and has paid negligible dividends).
A related point: The US stock market has averaged 10% annual returns over a century. If your style of reasoning worked, we should instead buy a 3x levered S&P 500 ETF, get 30% return per year, compounding to 1278% return over a decade, handily beating out 162%.
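A minimal simulation of the daily-reset mechanism, on a hypothetical zig-zag price path, illustrates the drag:

```python
# A daily zig-zag that leaves the underlying exactly flat still erodes a
# 3x daily-reset fund — the "volatility drag" described above.
up = 0.02
down = -up / (1 + up)              # chosen so (1 + up) * (1 + down) == 1
underlying = levered = 1.0
for day in range(252):             # one year of alternating daily moves
    r = up if day % 2 == 0 else down
    underlying *= 1 + r
    levered *= 1 + 3 * r           # daily reset: 3x each day's return
print(round(underlying, 6), round(levered, 3))  # underlying ~1.0, levered ~0.74
```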
For what it’s worth, volatility decay will tend to enhance returns in a bull market for the same reason it exacerbates losses in a bear or sideways market.
This means in a rising rates scenario I would actually expect an inverse leveraged ETF to do better than a margin account that shorts treasuries with the same leverage. This actually just happened in 2022.
In 2022, TLT the 1x long term treasuries ETF lost 31%, TMF the 3x long term treasuries ETF lost ‘only’ 73%, while TTT the 3x short long term treasuries ETF gained 150%.
TLT – Performance – iShares 20+ Year Treasury Bond ETF | Morningstar
TMF – Performance – Direxion Daily 20+ Yr Trsy Bull 3X ETF | Morningstar
TTT – Portfolio – ProShares UltraPro Short 20+ Year Trs | Morningstar
This is a counterexample to the numbers Wei Dai posted, to show that volatility decay is not necessarily always harmful.
1) The article recommends financial instruments that are extremely volatile, as the percentage gains and losses I posted above indicate.
2) Long term treasuries have ~5% gains/year historically, so shorting long term treasuries under normal circumstances means you will keep losing 5% every year (or 15% if you are 3x).
As I said earlier, volatility decay on its own is not the worst thing in the world if you have positive expected returns. But if you combine volatility decay with extremely high volatility and historical negative returns, I do believe that would make it a risky combination. I ultimately agree with Wei_Dai.
2022 and the 1970s showed that inflation or nominal GDP growth can wreak havoc on asset values.
The discussion here is on real rates though, not nominal rates. Do we have examples of rates rising in a low-inflation environment? Yes! I came across a couple while browsing another forum a while ago. I will copy the relevant post below with some edits. I will not link the post from the other forum, as I am a new poster on the EA forum and don’t want to be flagged for spamming. You will have to assume the dates are cherry-picked.
From Dec 2015 to Dec 2018, the Fed increased rates from 0–0.25% to 2.25–2.5% (a 10x increase!) and [55% S&P 500 and 45% long-term treasuries] [nearly matched 100% S&P 500]. Edit: [A 55% S&P 500 and 45% inverse long-term treasuries portfolio does worse]:
https://www.portfoliovisualizer.com/bac … tion2_1=45
From Jun 2004 to Jun 2006, the Fed increased rates from 1.25% to 5.25% and [55% S&P 500 and 45% long-term treasuries] also [nearly matched 100% S&P 500]. Edit: [A 55% S&P 500 and 45% inverse long-term treasuries portfolio does worse]:
https://www.portfoliovisualizer.com/bac … n10_1=-200
Finally, I must express my appreciation for the valuable insights presented in this piece. The authors’ diligent research and thoughtful analysis truly made an impact on my perspective. I am now more wary of assets that are sensitive to interest rates than the average investor.
The entire section is based on a first-order approximation, as explicitly noted in the post (which is also why we set aside e.g. the important issue of convexity). This point is of course correct!
This calculation, like that of many other commenters, estimates the total return. What matters is risk-adjusted return (a la Sharpe ratio). If you think the market is literally wrong with certainty, then the bet could be literally risk-free (“infinite Sharpe”, speaking loosely). If you aren’t 100% certain, then you have a finite risk-adjusted return, but still high—how high depends on your confidence level (etc).
Equities, on the other hand, have risk!
We welcome other criticisms to discuss, but comments like your first line are not helpful!
The point of my comment was that even if you’re 100% sure about the eventual interest rate move (which of course nobody can be), you still have major risk from path dependency (as shown by the concrete example). You haven’t even given a back-of-the-envelope calculation for the risk-adjusted return, and the “first-order approximation” you did give (which both uses leverage and ignores all risk) may be arbitrarily misleading, even for the purpose of “gives an idea of how large the possibilities are”. (Because if you apply enough leverage and ignore risk, there’s no limit to how large the possibilities are of any given trade.)
I thought about not writing that sentence, but figured that other readers can benefit from knowing my overall evaluation of the post (especially given that many others have upvoted it and/or written comments indicating overall approval). Would be interested to know if you still think I should not have said it, or should have said it in a different way.
What effect do you think an AI boom would have on inflation?
It seems like it would be deflationary, since it would drive down the cost of goods and labour, though it might cause inflation in finite resources like commodities and land, so perhaps the net effect could go either way?
(I partly ask because a common framework in investing for thinking about what drives asset prices is to break it into growth shocks, inflation shocks, changes in investor risk appetite, and changes in interest rate policy. If AI will cause a growth shock and a deflation shock, then normally that would be seen as positive for equities, ambiguous for real assets and nominal bonds, and negative for TIPS.)
At what time horizon? For anything over a year, I’d default to the quantity theory of money: inflation should roughly equal the rate of money supply growth (i.e., a central bank choice) minus the real rate of economic growth. Increasing the money supply at 30% per year is easy, so if the Fed wanted to avoid deflation it seems like it could. The short run during such a dramatic regime change could become wacky.
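A one-line version of that BOTEC, with purely illustrative numbers:

```python
# Quantity-theory approximation: inflation ~ money-supply growth - real growth.
real_growth = 0.30                 # hypothetical AI-driven boom in real output
money_growth = 0.30                # the central bank simply matches it
print(money_growth - real_growth)  # ~0: deflation is avoidable in principle
```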
Thanks for writing this! Great to see this written out finally.
Briefly, to reiterate / expand on a point made by a few other comments: I think the title is somewhat misleading, because it conflates expecting aligned AGI with expecting high growth. People could be expecting aligned AGI but (correctly or incorrectly) not expecting it to dramatically raise the growth rate.
This divergence in expectations isn’t just a technical possibility; a survey of economists attending the NBER conference on the economics of AI last year revealed that most of them do not expect AGI, when it arrives, to dramatically raise the growth rate. The survey should be out in a few weeks, and I’ll try to remember to link to it here when it is.
Yes, to emphasize, the post is meant to define the situation under consideration as: “something close to a 10x increase in growth; or death”. We’re interested in this scenario only because it’s the modal scenario in the particular world of LW/EA/AI safety.
The logic of the argument does not apply as forcefully to “smaller” changes (which could potentially still be quite large), and would not apply at all if AI did not increase growth (ie did not decrease marginal utility of consumption)!
One possible explanation is an expectation of massive deflation (perhaps due to AI-caused decreases in production costs) which the structure of Treasury Inflation Protected Securities (TIPS) and other inflation-linked government bonds — the source of your real interest rate data — doesn’t account for.
While TIPS adjust the principal (and corresponding coupons) up and down over time according to changes in the consumer price index, you ALWAYS get at least the initial principal back at maturity. Typical “yield” calculations, however, are based on the assumption that you get your inflation-adjusted principal back (which you do if inflation was positive over its term, as it usually would be historically).
This means that iff there’s net deflation over its term, the “yield” underestimates your real rate of return with TIPS by the amount of that deflation.
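A sketch of that payoff structure (the function and numbers are illustrative, not the Treasury’s exact indexation mechanics):

```python
# TIPS redemption floor: the principal is scaled by the CPI ratio over the
# bond's term, but redemption at maturity is never below the initial principal.
def tips_redemption(principal, cpi_ratio):
    return max(principal, principal * cpi_ratio)

print(tips_redemption(1000, 1.25))  # 1250.0 — inflation: standard yield math holds
print(tips_redemption(1000, 0.80))  # 1000, not 800 — under net deflation the floor
                                    # binds, so realized return beats the quoted yield
```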
1. Very interesting, thanks, I think this is the first or second most interesting comment we’ve gotten.
2. I see that you are suggesting this as a possibility, rather than a likelihood, but I’ll note at least for other readers that—I would bet against this occurring, given central banks’ somewhat successful record at maintaining stable inflation and desire to avoid deflation. But it’s possible!
3. Also, I don’t know if inflation-linked bonds in the other countries we sample—UK/Canada/Australia—have the deflation floor. Maybe they avoid this issue.
4. Long-term inflation swaps (or better yet, options) could test this hypothesis! i.e. by showing the market’s expectation of future inflation (or the full [risk-neutral] distribution, with options).
(duplicating from LW)
It appears the UK’s index-linked gilts, at least, don’t have this structural issue.
See “redemption payments” on page 6 of this document, or put in a sufficiently large negative inflation assumption here.
From an altruistic point of view, your money can probably do a lot more good in worlds with longer timelines. During an explosive growth period, humanity will be so rich that people will likely be fine without our help, whereas if there’s a long AI winter there will be a lot of people who still need bednets, protection from biological x-risks, and other philanthropic support. Furthermore, in the long-timeline worlds there’s a much better chance that your money can actually make a difference in solving AI alignment before AGI is eventually developed. So if anything, I think the appropriate altruistic investment approach is the opposite of what this post suggests: even if you think timelines will be short, you should bet that they will be long.
From a personal point of view, it’s likewise true that marginal dollars are much more useful to you during an AI winter than during an explosive growth period (when everyone will be pretty rich anyway), so you should make trades that move money from short-timeline futures to long-timeline ones. But I do agree with the post that short timelines should increase your propensity to consume today. (The “borrow today” proposal is impractical since nobody will actually lend you significant amounts of money unsecured, but you might want to spend down savings faster than you otherwise would.)
Borrowing money if timelines are short seems reasonable, but, as others have said, I’m not at all convinced that betting on long-term interest rates is the right move. In part for this reason, I don’t think we should read financial markets as asserting much at all about AI timelines. A couple of more specific points:
(a) The trade you’re suggesting could take decades to pay off, and in the meantime might incur significant drawdown. It’s not at all clear that this would be a prudent use of capital for ‘sharp money’.
(b) Even if we suppose that sharps want to bet on this, that bet would be a fraction of their capital, which in turn is a fraction of the total capital in financial markets. If all of the world’s financial assets are mispriced, as you say, why should we expect this to make a dent?
Setting aside that the examples given are inapposite, surely there are plenty in both directions? To pick just one notable counterexample: The S&P 500 broke new all-time highs in mid-Feb 2020, only to crash 32% the following month, then rise 70% over the following year. So markets did a very poor job of forecasting COVID, as well as the subsequent response, on a time horizon of just a few months!
Both of these were rapid responses to recent major events (albeit ahead of common wisdom), as opposed to an abstract prediction about something years in the future.
This is delightful. Such a good post. Great stuff, fellas.
Very nice post! I’m not sure if you have looked into this, but, markets aside, given that people in EA believe these claims about AI risk and short timelines, are EA charities spending money in proportion to the seriousness with which the community treats short AI timelines and AI x-risk? For example, you cited Open Philanthropy reports such as Bio Anchors, from which you extracted some of the probabilities used in your calculation. Do you think Open Phil’s spending is in line with the timelines suggested by Bio Anchors?
A high real rate doesn’t necessarily imply a high nominal rate, it could also come with huge deflation, in which case shorting government debt won’t get you anywhere.