While this is a very valuable post, I don’t think the core argument quite holds, for the following reasons:
Markets work well as information aggregation algorithms when it is possible to profit a lot from being the first to realize something (e.g., as portrayed in “The Big Short” about the Financial Crisis).
In this case, there is no way for the first movers to profit big. Sure, you can take your capital out of the market and spend it before the world ends (or everyone becomes super-rich post-singularity), but that’s not the same as making a billion bucks.
You could argue that one can take a short position on interest rates (e.g., in the form of a loan) if you believe they will rise at some point, but that is a different bet from short timelines. What you're betting on then is when the world will realize that timelines are short, since that's what it will take before many people pull their capital out of the market and drive interest rates up. It's entirely possible to believe both that timelines are short and that the world won't realize AI is near for a while yet, in which case you wouldn't make this bet. Furthermore, counterparty risk tends to get in the way of taking out very large loans, so it would dominate your cost of capital.
All that said, it is possible that the strategy of “people with a high x-risk estimate should use long-term loans to fund their work” is indeed a feasible funding mechanism for such work, since this would not be a bet intended to make the borrower rich; it would just be a bet to survive, although you could end up poor in the process.
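As a purely illustrative sketch of the “bet to survive” framing (all numbers below are assumptions I made up, not anything from the post), the expected cost of a long-term fixed-rate loan looks very different to a borrower who assigns substantial probability to repayment never mattering:

```python
# Illustrative only: made-up numbers, not financial advice.
# Expected cost of a 10-year fixed-rate loan for a borrower who assigns
# probability p to "repayment never matters" (doom or post-singularity
# abundance) arriving before the loan comes due.

principal = 100_000        # amount borrowed (assumed)
rate = 0.07                # fixed annual interest rate (assumed)
years = 10                 # loan term (assumed)
p_no_repay = 0.5           # borrower's credence that repayment never matters

repayment = principal * (1 + rate) ** years      # owed in the "normal world" branch
expected_cost = (1 - p_no_repay) * repayment     # cost is zero in the other branch

print(f"Owed if the world stays normal: {repayment:,.0f}")
print(f"Expected repayment under the borrower's beliefs: {expected_cost:,.0f}")
# With these numbers the expected repayment is roughly the principal itself,
# so the loan looks cheap to the borrower -- but the downside branch leaves
# them servicing the full debt, i.e. "poor in the process".
```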
> You could argue that one can take a short position on interest rates (e.g., in the form of a loan) if you believe they will rise at some point, but that is a different bet from short timelines. What you're betting on then is when the world will realize that timelines are short, since that's what it will take before many people pull their capital out of the market and drive interest rates up. It's entirely possible to believe both that timelines are short and that the world won't realize AI is near for a while yet, in which case you wouldn't make this bet.
This reasoning sounds pretty tortured to me.
First, should you really believe that the relatively small number of traders needed to move markets won’t come to think AI is a really big deal, given that you think AI is a really big deal?
Second, if “the world won’t realize AI is near for a while,” you can still make money by following analogous strategies to those described in the post. You don’t need the world to realize tomorrow.
I see that I wasn’t being super clear above. Others in the comments have pointed to what I was trying to say here:
- The window between when “enough” traders realize that AI is near and when it arrives may be very short, meaning that even in the best case you’ll only increase your wealth for a very short time by making this bet
- It is not clear how markets would respond if most traders started thinking that AI was near. They may focus on other opportunities they believe are stronger than shorting interest rates (e.g., they may decide to invest in tech companies), or they may decide to take some vacation
- In order to get the benefits of the best case above, you need to take on massive interest rate risk, so the downside is potentially much larger than the upside (plus, in the downside case, you’re poor for a much longer time)
Therefore, traders may choose not to short interest rates, even if they believe AI is imminent (a rough payoff sketch below illustrates why).
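To make the asymmetry in the third bullet concrete, here is a minimal sketch with entirely made-up probabilities, durations, and yield moves (none of these numbers come from the post); it uses the standard duration approximation for bond price changes:

```python
# Illustrative only: made-up scenario probabilities and magnitudes.
# Rough payoff of shorting a long-duration bond, using the linear
# approximation  return ≈ -duration * (yield change)  and ignoring convexity.

duration = 20.0            # effective duration of the bond shorted (assumed)
carry_per_year = 0.04      # approx. yield paid while holding the short (assumed)

scenarios = [
    # (probability, yield change, years until resolution)
    (0.3, +0.03, 2),   # markets wake up to short timelines, rates jump
    (0.7, -0.01, 8),   # timelines long or never priced in; rates drift down
]

expected = 0.0
for prob, d_yield, years in scenarios:
    price_move = duration * d_yield          # the short gains when yields rise
    payoff = price_move - carry_per_year * years
    expected += prob * payoff
    print(f"p={prob:.1f}: payoff ≈ {payoff:+.0%} of notional after {years}y")

print(f"Expected payoff ≈ {expected:+.0%} of notional")
# Under these assumptions the trade pays off handsomely in the "wake up soon"
# branch but bleeds carry for years otherwise, which is the asymmetry the
# third bullet is pointing at.
```

Obviously the sign of the expected payoff flips with different assumed probabilities and carry; the point is only that the cost of the bet grows with how long the market takes to catch up.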
I don’t think that you were being unclear above. The underlying reasoning still feels a little tortured to me.
> The window between when “enough” traders realize that AI is near and when it arrives may be very short, meaning that even in the best case you’ll only increase your wealth for a very short time by making this bet
I mean, sure, it could be, but wouldn’t it be weird to believe this confidently? The artists are storming parliament, the accountants are on the dole, foom just around the corner—but a small number of traders have not yet clocked that an important change is coming?
> It is not clear how markets would respond if most traders started thinking that AI was near. They may focus on other opportunities they believe are stronger than shorting interest rates (e.g., they may decide to invest in tech companies), or they may decide to take some vacation
Traders are not dumb. At least, the small number of traders necessary to move the market are not dumb. They will understand the logic of this post. A mass ignoring of interest rates in favor of tech equity investing is not a stable equilibrium.
> In order to get the benefits of the best case above, you need to take on massive interest rate risk, so the downside is potentially much larger than the upside (plus, in the downside case, you’re poor for a much longer time)
To get the benefits of the best case of anything, you need to take on risk, and you could make the same directional bet with less risk. If you weaken the statement to “exposure to a good chunk of the benefits of the implications of their beliefs, by taking on reasonable risk,” the interest rate conclusion still goes through.
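To make “the same directional bet with less risk” concrete, here is a minimal position-sizing sketch; the wealth, duration, and yield moves are all hypothetical numbers of my own, not anything from the post. The idea is to cap the loss from a plausible adverse move at a fixed fraction of wealth and see how much upside survives:

```python
# Illustrative sizing sketch with assumed numbers -- not advice.
# Cap the damage from an adverse rate move at a chosen fraction of wealth,
# then see how much upside the same (smaller) short position keeps.

wealth = 1_000_000         # investable wealth (assumed)
duration = 20.0            # duration of the instrument shorted (assumed)
adverse_move = -0.01       # yields fall 1 percentage point (assumed bad case)
favorable_move = +0.03     # yields jump 3 points if AI gets priced in (assumed)
max_loss_fraction = 0.10   # willing to lose at most 10% of wealth

loss_per_notional = duration * abs(adverse_move)       # 20% of notional in the bad case
notional = wealth * max_loss_fraction / loss_per_notional

gain_if_right = notional * duration * favorable_move
print(f"Short notional: {notional:,.0f}")
print(f"Loss if wrong:  {notional * loss_per_notional:,.0f}  (10% of wealth)")
print(f"Gain if right:  {gain_if_right:,.0f}  (~30% of wealth)")
```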
> At least, the small number of traders necessary to move the market are not dumb. They will understand the logic of this post. A mass ignoring of interest rates in favor of tech equity investing is not a stable equilibrium.
Could you try to give an estimate of how much money would be necessary to move the markets? I’m not particularly familiar with the Treasury market, but I’m not convinced that a small number of traders, or even a few billion dollars per year in “smart money,” could significantly change it, at least not enough to send a signal about traders’ views that is distinguishable from the surrounding noise.
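For a rough sense of scale, here is a back-of-envelope comparison; the Treasury-market figures below are order-of-magnitude assumptions on my part, not sourced data, and flow share is at best a crude proxy for price impact:

```python
# Order-of-magnitude comparison with rough assumed figures -- not precise data.
smart_money_per_year = 5e9      # hypothetical "smart money" flow per year
treasury_daily_volume = 6e11    # assumed ~$600B average daily trading volume
treasury_outstanding = 2.5e13   # assumed ~$25T marketable Treasuries outstanding

print(f"Smart money vs one day's volume:   {smart_money_per_year / treasury_daily_volume:.1%}")
print(f"Smart money vs market outstanding: {smart_money_per_year / treasury_outstanding:.3%}")
# Under these rough numbers the flow is under 1% of a single day's trading,
# which is the intuition behind the question above.
```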
I think I’ll try to write up my objections in a post rather than a comment; it seems to me that this post is so close to being right that it takes effort to pinpoint exactly where I disagree, so I want to take the time to formalize it a bit more.
But in short, I think it’s possible to have 1) rational traders, 2) markets that largely function well, and 3) still no 5+ year advance signal of AGI in the markets, without making very weird assumptions. (Note: I chose the 5+ year horizon because once you get really close to AGI, say less than a year out, with lots of weird stuff going on, you’d at least see some turbulence in the markets as people get confused about how to trade in such a strange situation. So I do think the markets provide some evidence against extremely short timelines.)
(A short additional note: yes, some of this is addressed at more length in the post, e.g., in section X regarding my point 3, but IMO the authors state their case somewhat too strongly in those sections. You do not need a Yudkowskian “foom” scenario that happens overnight for the following to be plausible: “timelines may be short-ish, say ~10 years, but the world will not realize until quite soon before, say 1-3 years, and in the meantime it won’t make sense for most people to bet on interest rate movements.”)