Lots of the comments here are pointing at details of the markets and whether it’s possible to profit off of knowing that transformative AI is coming. Which is all fine and good, but I think there’s a simple way to look at it that’s very illuminating.
The stock market is good at predicting company success because there are a lot of people trading in it who think hard about which companies will succeed, doing things like writing documents about those companies’ target markets, products, and leadership. Traders who do a good job at this sort of analysis get more funds to trade with, which makes their trading activity have a larger impact on the prices.
Now, when you say that:
the market is decisively rejecting – i.e., putting very low probability on – the development of transformative AI in the very near term, say within the next ten years.
I think what you’re claiming is that market prices are substantially controlled by traders who have a probability like that in their heads. Or traders who are following an algorithm which had a probability like that in the spreadsheet. Or something like that. Some sort of serious cognition, serious in the way that traders treat company revenue forecasts.
And I think that this is false. I think their heads don’t contain any probability for transformative AI at all. I think that if you could peer into the internal communications of trading firms, and you went looking for their thoughts about AI timelines affecting interest rates, you wouldn’t find thoughts like that. And if you did find an occasional trader who had such thoughts, and quantified how much impact they would have on the prices if they went all-in on trading based on that theory, you would find their impact was infinitesimal.
Market prices aren’t mystical, they’re aggregations of traders’ cognition. If the cognition isn’t there, then the market price can’t tell you anything. If the cognition is there but it doesn’t control enough of the capital to move the price, then the price can’t tell you anything.
I think this post is a trap for people who think of market prices as a slightly mystical source of information, who don’t have much of a model of what cognition is behind those prices.
I find it hard to believe that the number of traders who have considered crazy future AI scenarios is negligible. New AI models, semiconductor supply chains, etc. have gotten lots of media and intellectual attention recently. Arguments about transformative AGI are public. Many people have incentives to look into them and think about their implications.
I don’t think this post is decisive evidence against short timelines. But neither do I think it’s a “trap” that relies on fully swallowing EMH. I think there’re deeper issues to unpack here about why much of the world doesn’t seem to put much weight on AGI coming any time soon.
Plenty of people at Jane Street read LessWrong.
Just a note on Jane Street in particular—nobody at Jane Street is making a potentially multi year bet on interest rates with Jane Street money. That’s simply not in the category of things that Jane Street trades. If someone at Jane Street wanted to make betting on this a significant part of what they do, they’d have to leave and go elsewhere and find someone to give them at least hundreds of millions of dollars to make the bet.
Jane Street even hosted a foom debate between Hanson and Yudkowsky, IIRC.
(I don’t think this is substantial evidence on the validity of the original post.)
Yeah, I’m similarly sceptical that a highly publicised/discussed portion of one of the most hyped industries — one that borders on a buzzword at times — has not captured the attention or consideration of the market. That seems hard to imagine given the remarkably salient progress we’ve seen in 2022.
Thanks for this—I think you’ve put really nicely the interpretation that we’re also pushing for.
It’s unclear to me that just because the number/liquidity of traders “in the know” is not very small (e.g., it is more than 0.1% of capital) this leads to the market correcting itself. At least, I have some reservations about what I interpret to be the causal process. Suppose that some set of early investors correctly think that ~3% of investors will adopt their own reasoning and engage in similar actions (e.g., “shorting” the long-term bond market) about 6 years before AGI.
But despite all of their reasoning, a very large portion of capital-weighted investors still don’t believe (A) the whole AGI argument, or (B) that there’s much worth doing once they do believe the whole AGI argument (e.g., “well, I guess I should just try not to die before AGI and enjoy my last normal years with my family/friends”).
I see a few potential problems, but am not sure about enough details to know whether the market would suffer from these problems:
It seems plausible that large institutional investors will just balance against any large uptick early on, preventing investors from getting much of any profit in the first 10 or so years, leaving only 5-ish years for profits to start accumulating (and that’s before considering discount rates);
Even once the potential for profit opens up, or even if the previous point doesn’t apply very strongly, some investors might eventually think they’ll be left “holding the bag” if they ever run into a multi-year plateau in beliefs/capital movement. This could be a scenario where most of the “AGI-open-minded” investors have been tapped, but most other people in society are still skeptical (i.e., it isn’t a smooth distribution of open-mindedness). Short-term profit relies on rates increasing after you go short, but if you don’t expect rates to increase then you won’t enter the market and adjust the prices. And the expectation that the person after you might reason the same way, recursively, disincentivizes you from entering, creating a cascading effect.
“Well, I’ll profit eventually, even if it takes 10 years of waiting”—not necessarily, or at that point you may not really enjoy the profits, as it may be “I have 8-figure assets but 3 years left of (normal) life.” I’m not confident that this is a sufficiently appealing offer to the people who could take you up on it and move the market.
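The discounting worry in these points can be made concrete with a toy calculation. This is only an illustrative sketch with made-up numbers (the 50% payoff, the 8% discount rate, and the plateau lengths are all hypothetical), not an estimate of any real trade:

```python
# Toy model of the "holding the bag" worry: a short position on long-term
# bonds that only pays off once enough capital agrees with you. The longer
# the plateau before repricing, the less the profit is worth today.

def discounted_profit(gross_return: float, years_until_repricing: float,
                      discount_rate: float) -> float:
    """Present value of a profit realized only after the market reprices."""
    return gross_return / (1 + discount_rate) ** years_until_repricing

# Hypothetical: the trade eventually returns 50%, discounted at 8%/year.
for years in (1, 5, 10):
    pv = discounted_profit(0.50, years, discount_rate=0.08)
    print(f"repricing after {years:>2} years: present value = {pv:.1%}")
```

Even before the recursive “who buys in after me?” worry, a decade-long plateau cuts the present value of the trade roughly in half, which weakens the incentive to be the early capital that moves the price.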
Definitely agree with this. Consider, for instance, how markets seemed to react strangely / too slowly to the emergence of the Covid-19 pandemic, and then consider how much more familiar and predictable the idea of a viral pandemic is compared to the idea of unaligned AI:
The coronavirus was x-risk on easy mode: a risk (global influenza pandemic) warned of for many decades in advance, in highly specific detail, by respected & high-status people like Bill Gates, which was easy to understand with well-known historical precedents, fitting into standard human conceptions of risk, which could be planned & prepared for effectively at small expense, and whose absolute progress human by human could be recorded in real-time… If the worst-case AI x-risk happened, it would be hard for every reason that corona was easy. When we speak of “fast takeoffs”, I increasingly think we should clarify that apparently, a “fast takeoff” in terms of human coordination means any takeoff faster than ‘several decades’ will get inside our decision loops. -- Gwern
Peter Thiel (in his “Optimistic Thought Experiment” essay about investing under anthropic shadow, which I analyzed in a Forum post) also thinks that there is a “failure of imagination” going on here, similar to what Gwern describes:
Those investors who limit themselves to what seems normal and reasonable in light of human history are unprepared for the age of miracle and wonder in which they now find themselves. The twentieth century was great and terrible, and the twenty-first century promises to be far greater and more terrible. …The limits of a George Soros or a Julian Robertson, much less of an LTCM, can be attributed to a failure of the imagination about the possible trajectories for our world, especially regarding the radically divergent alternatives of total collapse and good globalization.
The markets reacted appropriately to covid. Match the Dow to forecasters’ and EAF’s prognostications and you’ll find that the markets moved in tandem with rational expectations.
Not only have I never heard this before, I was there and remember watching this not happen. Source?
https://www.marketwatch.com/investing/index/djia
The Dow plateaued in early January and crashed starting Feb 20th, tracking rational expectations and three weeks ahead of media/mass awareness, which only caught up around March 12th
Almost everyone I knew was concerned with the pandemic going global and dramatically disrupting our lives much sooner than Feb 20th. On January 26th, a post on the EA Forum, “Concerning the Recent 2019-Novel Coronavirus Outbreak”, made the case that we should be worried. By a few weeks later, everyone I knew was already bracing for covid to hit the US. Looking back at my house Discord server, we had the “if we have to go weeks without leaving the house, is there anything we’d run out of? Let’s buy it now” conversation on February 6th (which is also when my Vox article about Covid was published, in which I quote a source saying, “Instead of deriding people’s fears about the Wuhan coronavirus, I would advise officials and reporters to focus more on the high likelihood that things will get worse and the not-so-small possibility that they will get much worse.”)
The late January SlateStarCodex open threads also typically contained 10-20 comments discussing the virus, linking prediction markets, and debating the odds of more than 500k deaths and how people in various places should expect disruptions to their daily life. (“If everyone involved massively bungles absolutely everything, this would be pretty-bad-but-not-apocalyptic,” one commenter argued on January 29th.)
In late January/early February, I think attitudes were that the virus was a big deal but still more likely than not to be successfully contained, though people should prepare just in case. I think people with our knowledge state wouldn’t’ve bet confidently on a failure of containment on January 30th (some did, but it wasn’t the median community stance), but the markets would have started moving in that direction steadily from very early in February.
I think financial markets not responding until Feb 20th was a clear case of markets doing substantially worse than the people around me.
I agree with most of this comment, but
As someone who knows nothing about finance, I don’t understand this point.
If you had bought the S&P 500 on Feb 20th 2020, you would be up 20% today, so the market not reacting does not seem that irrational in hindsight? Also, US GDP didn’t seem to change that much in 2020 and 2021?
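For what it’s worth, the ~20% figure roughly checks out arithmetically with approximate index levels (the S&P 500 closed near 3373 on Feb 20th 2020; a level near 4000 “today” is an assumption about when this comment was written, and both figures are approximations):

```python
# Rough sanity check of the buy-and-hold claim above. Both levels are
# approximate; the exact return depends on the dates chosen.
buy_level = 3373.0    # approx. S&P 500 close, 2020-02-20
later_level = 4000.0  # assumed approx. level at time of writing
total_return = later_level / buy_level - 1
print(f"buy-and-hold return: {total_return:.1%}")  # roughly +19%
```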
I guess VIX options might have been underpriced, but I think you would need to time them pretty precisely, around March?
I know some people in the community made a bunch of money, but in periods of high volatility I expect many people to make some money and many people to lose some money (for example when the market immediately recovered while still in the middle of a pandemic).
I’m not totally sure what I think the correct market behavior based on knowable information was, but it seems very hard to make the case that a large crash on Feb 20th is evidence of the markets moving “in tandem with rational expectations”.
Here’s what I wrote in April 2020 on that topic:
“A couple weeks ago, I started investigating the response, here and in the stock market, to COVID-19. I found that LessWrong’s conversation took off about a week after the stock market started to crash. Given what we knew about COVID-19 prior to Feb. 20th, when the market first started to decline, I felt that the stock market’s reaction was delayed. And of course, there’d been plenty of criticism of the response of experts and governments. But I was playing catch-up. I certainly was not screaming about COVID-19 until well after that time.
Today, I found the most detailed timeline I’ve seen of confirmed cases around the world. It goes day by day and country by country, from Jan. 13th to the end of March.
That timeline shows that Feb. 21st was the first date when at least 3 countries besides China had 10+ new confirmed cases in a single day (Japan, South Korea, Italy, and Iran).
That changes my interpretation of the stock market crash dramatically. Investors weren’t failing to synthesize the early information or waiting for someone to yell “fire!” They were waiting to see confirmed international community spread, rather than just a few cases popping up here and there. Once they saw that early evidence, the sell-off began, and it continued in tandem, day by day, with the evidence of community spread in new countries and the exponential growth of COVID-19 cases in countries where it was already established.”
https://www.lesswrong.com/posts/EdJD3v2uxLrL3pA75/my-stumble-on-covid-19
The plateau beginning early January could be read as an initial reaction to covid.
I wouldn’t expect the markets to react in tandem with the most alarmist rationalists. I participated in a rationalist prediction tournament in mid-January 2020 where only one participant gave COVID >50% odds of killing 10,000 people. The EAF post you linked was an unusual view at the time, as were Travis W Fisher’s comments at Metaculus. I grant that the rationalist consensus preceded the market’s reaction, but only by days.
Numerous people on rationalityTwitter called it way before Feb 20th, and some of those bought put options and made big profits. This must be some interesting new take on “rational expectations”. https://twitter.com/ESYudkowsky/status/1229529150098046976?s=20&t=IGOl9Mzj1FYtcPYd1F52AQ
Yet the tweets you linked were from 2/16 and 2/17.
Rational expectations doesn’t mean “the alarmists are always right,” and EMH doesn’t imply that no one can profit helping correct the market.
The tweets you linked demonstrate the confusion at the time. Robin thought that China would be overwhelmed with COVID in a few months, while the rest of the world would be closing contact. In fact the rest of the world got overwhelmed with COVID and crashed their economies in just one month, while China contained it and kept its economy rolling for another two years. Rational expectations would’ve incorporated views like Robin’s, but not parroted them. A plateau from early January and crash on 2/20 isn’t inconsistent with that.
It doesn’t seem all that relevant to me whether traders have a probability like that in their heads. Whether they have a low probability or are not thinking about it, they’re approximately leaving money on the table in a short-timelines world, which should be surprising. People have a large incentive to hunt for important probabilities they’re ignoring.
Of course, there are examples (cf. behavioral economics) of systemic biases in markets. But even within behavioral economics, it’s fairly commonly known that it’s hard to find ongoing, large-scale biases in financial markets.
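To see why an ignored probability is “money on the table,” here is a toy expected-value calculation. It models the bet as a simple binary contract, which is a simplification (real interest-rate trades aren’t binary), and both probabilities are hypothetical:

```python
# Toy EV of betting on an event the market underprices. Modeled as a
# binary contract that costs market_p per $1 of payout if the event occurs.

def expected_value(your_p: float, market_p: float, stake: float = 1.0) -> float:
    """Expected profit of staking `stake` on a binary claim priced at market_p."""
    payout_if_true = stake / market_p       # contracts bought * $1 payout each
    return your_p * payout_if_true - stake  # expected payout minus cost

# Hypothetical: you believe P = 0.30, the market implicitly prices ~0.03.
ev = expected_value(your_p=0.30, market_p=0.03)
print(f"expected profit per $1 staked: ${ev:.2f}")  # $9.00
```

The flip side is the behavioral-economics point above: an expected value this large should attract correcting capital, which is why persistent gaps of this size are rarely found in liquid markets.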
The claim in the post (which I think is very good) is that we should have a pretty strong prior against anything which requires positing massive market inefficiency on any randomly selected proposition where there is lots of money on the table. This suggests that you should update away from very short timelines. There’s no assumption that markets are a “mystical source of information,” just that if you bet against them you almost always lose.
There’s also a nice “put your money where your mouth is” takeaway from the post, which AFAIK few short-timelines people are acting on.
I think a fair number of market participants may have something like a probability estimate for transformative AI within five years and maybe even ten. (For example back when SoftBank was throwing money at everything that looked like a tech company, they justified it with a thesis something like “transformative AI is coming soon”, and this would drive some other market participants to think about the truth of that thesis and its implications even if they wouldn’t otherwise.) But I think you are right that basically no market participants have a probability estimate for transformative AI (or almost anything else) 30 years out; they aren’t trying to make predictions that far out and don’t expect to do significantly better than noise if they did try.
(Even if for some reason you’re wrong for the case of transformative AI specifically, your comment still made me smarter, so thanks! :) )