I disagree with the idea that short AI timelines are not investable (although I agree interest rates are a bad and lagging indicator compared to AI stocks). People who foresaw rising expectations of AI sales, driven by scaling laws, shortish AI timelines, and the eventual magnitude of success, have already made a lot of money investing in Nvidia, DeepMind, and OpenAI. Incremental progress raises those expectations, and they can rise even in worlds where AGI winds up killing or expropriating all investors, so long as enough investors are expected to think ownership will continue to matter. In practice I know lots of investors expecting near-term TAI who are betting on it (in AI stocks, not interest rates, because the returns are better). They are also more attracted to cheap 30-year mortgages and similar sources of mild, cheap leverage. They put weight on worlds where society is not completely overturned and property rights matter after AGI, as well as during an AGI transition (e.g. a coalition of governments wanting to build AGI is more likely to succeed earlier and more safely with more compute and talent available to it, so it has reason to make credible promises that those who provide such resources will actually be compensated post-AGI; there is also the philanthropic value of being able to donate such resources).
And at the object level, from reading statements from investors and talking to them, investors weighted by trading in AI stocks (and overwhelmingly so for the far larger bond market that sets interest rates) largely don’t have short AI timelines (or at least not with enough confidence to invest on them) or expect explosive growth in AI capabilities. There are investors like Cathie Wood who do, with tens or hundreds of billions of dollars of capital, but they are few enough relative to the investment opportunities available that they are not setting, e.g., the prices for the semiconductor industry. I don’t see the point of indirect arguments from interest rates for the possibility that investors, or the market as a whole, could believe in AGI soon but only in versions where owning the AI chips or AI developers won’t pay off, when at the object level that possibility is known to be false.
Carl, I agree with everything you’re saying, so I’m a bit confused about why you think you disagree with this post.
This post is a response to the very specific case made in an earlier forum post, which uses a limited scenario to define transformative AI and then argues that we should see interest rates rising if traders believe that scenario to be near.
I argue that we can’t use interest rates to judge whether that specific scenario is near. That doesn’t mean there are no ways to bet on AI (in a broader sense). Yes, when tech firms are trading at high multiples, and valuations of companies like NVIDIA/OpenAI/DeepMind are growing, that’s evidence for the claim that “traders expect these technologies to become more powerful in the near-ish future”. Talking to investors provides further evidence in the same direction: I just left McKinsey, so up until recently I had plenty of those conversations myself.
So this post should not be read as an argument about what the market believes, nor is it an argument for short or long timelines. It is only an argument that interest rates aren’t strong evidence either way.
It seems to me like you disagree with Carl because you write:
So you’re saying that investors can’t win from betting on near-term TAI. But Carl thinks they can win.
As Tom says, sorry if I wasn’t clear.
Yes, in isolation I see how that seems to clash with what Carl is saying. But that’s after I’ve granted the limited definition of TAI (x-risk, or explosive, shared growth) from the earlier post. When you allow for scenarios with powerful AI where savings still matter, the picture changes (and I think that’s a more accurate description of the real world). I could have been clearer that this post was a case of “even if you blindly accept the (somewhat unrealistic) assumptions of another post, their conclusions don’t follow”, and not an attempt at describing reality as accurately as possible.
I have now updated the post to reflect this.