I’ve been discussing this concept for some time now, so I’m glad to see some people take a more formal stab at it. However, I must say that I’m overall disappointed with this post. I’ll just lay out a few summary points, and if people are actually still reading this deep into the comments and want to hear more thoughts, I can oblige later:
With the *slight* exception of the “you could be earning alpha” section, it does not really get deep into the causal mechanisms for why you should expect markets to be efficient.
I think this post should have done a better job of aggregating and responding to contrary viewpoints; it largely bypassed the key arguments (cruxes) of existing critics and went straight to people who were not familiar with the EMH+AGI debate, especially with all the references to empirical evidence (see next point). Granted, “better job” implies that the article did this at all, which I don’t recall it really doing, aside from occasional references to other viewpoints (IIRC).
I thought the empirical sections were decent, but they missed the crux of the debate.
The fact is, we don’t seem to have much precedent for this kind of scenario, with some debatable exceptions around the Cold War / Cuban Missile Crisis. Yet the authors didn’t spend much time on those examples, which seemed to be the most relevant ones.
Overall, I thought the empirical sections were not very helpful for the debate, aside from perhaps targeting audiences who are at a very early stage of the debate and hastily think “I’ll dismiss the EMH in general because of X.” (I am normally a big proponent of EMH-style reasoning; it’s not as though I, and many other people I know in this EMH+AGI debate, are saying “the EMH has never worked!”[1])
One cross-cutting objection off the top of my head is that the people with Special And Justified Knowledge may not be able to profit fast enough to correct the market. [I have read the responses in other comments, and I respond to one of them in the next point.]

This especially applies to two closely related causal mechanisms of the EMH, profit snowballing and dogpiling: “Suppose someone has better insights than everyone else about some asset. They may not be rich, and for various related reasons they are unable to immediately correct the market (i.e., the market is actually temporarily inefficient). However, if they are right/superior, they either a) can keep profiting over and over again until they become liquid/rich enough to individually correct the market, and/or b) other people see that this person is profiting over and over again, so they jump in and contribute to the market correction.”

The problem is that there may be only one or two profit cycles with AGI before the world goes crazy, yet it could take many years for this strategy to actually pay off, during which time the market will be “temporarily” inefficient. If real interest rates don’t rise for 15 years, and only start to rise ~5 years before AGI, then the market is inefficient for those 15 years, because small players can’t profit fast enough to fix the situation.
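To make this timing argument concrete, here is a toy sketch (all numbers are hypothetical, chosen purely for illustration; nothing here comes from the post):

```python
# Toy model (hypothetical numbers): a small trader with correct beliefs
# compounds profits through repeated contrarian "cycles", but each cycle
# takes years to pay off, and the arrival of AGI caps how many cycles fit.

def capital_after(start_capital, return_per_cycle, years_per_cycle, years_until_agi):
    """Compound capital through however many full profit cycles fit before AGI.

    Returns (final_capital, number_of_cycles_completed).
    """
    cycles = years_until_agi // years_per_cycle
    return start_capital * (1 + return_per_cycle) ** cycles, cycles

final, cycles = capital_after(
    start_capital=100_000,   # a well-off but small individual investor
    return_per_cycle=1.0,    # doubling each cycle -- deliberately generous
    years_per_cycle=5,       # contrarian macro bets can take years to resolve
    years_until_agi=12,
)
print(cycles, final)  # 2 cycles, $400k: nowhere near enough to move rates alone
```

Even with generously assumed returns, one or two cycles leave the trader orders of magnitude short of the capital needed to move global bond markets, so both the snowballing and the dogpiling mechanisms stall.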
“But those are just two causal mechanisms,” the authors/defenders might hypothetically reply, “and sometimes the market still corrects even without those mechanisms; look at *The Big Short*! And there are probably enough AI-conscious investors that they could move the market...”
First, I think it’s worth highlighting my view that the debate can unproductively explode at this point, because the original authors didn’t (IMO) do a good job of laying out their own causal mechanisms. This forces critics into a game of whack-a-mole full of delays: commenting, waiting for responses, addressing newly introduced causal mechanisms, parsing out alternate branches of disagreement, and so on. (However, I think the following subpoint addresses a fairly large part of the debate.)
Second, I don’t think the authors did a good job of differentiating between “sudden surprise takeoffs” (e.g., ~1 year of warning time, believed by ~1/3 of people) vs. “forecastable takeoffs” (e.g., ~10 years of warning time, believed by >1/10 of people). This seems somewhat cruxy in at least one direction: against the authors’ viewpoint. Ultimately (correct me if I’m wrong), it seems that the authors’ proposed strategy for profit relies on the belief that as you get closer to the expected AGI date, more/richer people will start to agree with your predictions (and still see benefits from getting in on the profit). Otherwise, prescient investors could believe: “AI is very likely to arrive around year X, but very few people or institutions will recognize this before year X-3, so real interest rates probably won’t change much at all until it’s too late; and even when they do change:

- The counterparty/non-payment risk may be high;
- I prefer a 50% chance of being moderately wealthy for 10 years to a 50% chance of being really rich for ~2 years before I die (with a 50% chance of being poor for 10 years if I bet big and am wrong);
- The world might experience chaos which undermines my ability to spend money on the things I value; etc.”
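The middle item in that hypothetical investor’s reasoning is an expected-utility point, and a toy calculation makes it concrete (the utility weights and year counts below are hypothetical, not from the post or this comment):

```python
# Toy expected-utility comparison (all weights hypothetical): an investor
# with correct beliefs about AGI may still decline the big bet, because
# the payoff arrives too close to doom to be worth much.

def expected_utility(p_right, u_right, years_right, u_wrong, years_wrong):
    # Crude model: utility = (quality-of-life level) x (years to enjoy it),
    # averaged over the cases where the bet pays off and where it doesn't.
    return p_right * u_right * years_right + (1 - p_right) * u_wrong * years_wrong

modest = expected_utility(0.5, u_right=2.0, years_right=10,   # moderately wealthy
                          u_wrong=1.0, years_wrong=10)        # merely average
big_bet = expected_utility(0.5, u_right=5.0, years_right=2,   # really rich, briefly
                           u_wrong=0.5, years_wrong=10)       # poor for a decade
print(modest, big_bet)  # 15.0 vs. 7.5: the modest strategy wins under these weights
```

Under these (made-up) weights the modest strategy dominates, which is the point: correct beliefs about AGI timing need not translate into an incentive to make the market-correcting bet.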
Third, I don’t think the claim that “there are probably enough AI-conscious investors...” is supported in this post, and I’m hesitant on this point. I am willing to budge, and it could be a fairly important point if we are in a “forecastable/slow takeoff” scenario, but I would like to see the post focus on that leaf of the debate rather than trying to recreate the trunk of the debate tree. And again, if we are in a “sudden, short-timeline” scenario, I suspect this possibility doesn’t matter all that much.
Sure, some people may hold this view, but a) I’m skeptical you’ll convince them with this article, and b) you can’t just focus on empirics and then declare victory when there are still many critics with objections you haven’t directly addressed.