It’s a stronger claim because “most people don’t expect AGI” implies “markets don’t expect AGI”.
I’m not sure that’s true. Markets often price things that only a minority of people know or care about. See the lithium example in the original post. That was a case where “most people didn’t know lithium was used in the H-bomb” didn’t imply that “markets didn’t know lithium was used in the H-bomb”.
^This is an extremely, extremely important point!
Market prices are not a democracy. The logic for the efficiency of markets is emphatically NOT ‘wisdom of the crowds’. It’s that the most knowledgeable traders have the most to gain from trading, and so do so, and determine the price. (I have a riff on this here)
Hm, Rohin has some caveats elaborating on his claim.
(Not literally so—you can construct scenarios like “only investors expect AGI while others don’t” where most people don’t expect AGI but the market does expect AGI—but these seem like edge cases that clearly don’t apply to reality.)
Unless they were edited in after these comments were written (which doesn’t seem to be the case afaict) it seems you should have taken those caveats into account instead of just critiquing the uncaveated claim.
Sorry, I stand by my comment 110%.
If you already knew that belief in AGI soon was a very contrarian position (including amongst the most wealthy, smart, and influential people), I don’t think you should update at all on the fact that the market doesn’t expect AGI.
I want to maximally push back on views like this. The economic logic for the informational efficiency of markets has nothing to do with consensus or ‘non-contrarianness’. Markets are informationally efficient because of the incentive for those who are most informed to trade.
The argument here emphatically cannot be merely summarized as “AGI soon [is] a very contrarian position [and market prices are another indication of this]”.
If investors with $1T thought AGI was coming soon, and therefore tried to buy up a portfolio of semiconductor, cloud, and AI companies (a much more profitable and capital-efficient strategy than betting on real interest rates), they could buy only a small fraction of those industries at current prices. There is a larger pool of investors who would sell at much higher than current prices, balancing that minority.
Yes, it’s weighted by capital and views on asset prices, but a small portion of the relevant capital trying to trade (with risk, and years in advance) on a thesis impacting many trillions of dollars of market cap still isn’t enough to drastically change asset prices against the counter-trades of other investors.
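To make that back-of-envelope concrete, here is a toy sketch. Every number and the linear supply schedule are invented for illustration (they are not from this thread): a $1T minority bidding into a ~$10T sector can only absorb a small fraction of it, and the counter-traders cap how far the price moves.

```python
import math

believer_capital = 1e12    # hypothetical capital convinced of AGI soon
sector_market_cap = 10e12  # hypothetical combined semi/cloud/AI market cap

# Fraction of the sector the believers could buy at current prices:
max_fraction = believer_capital / sector_market_cap
print(f"Believers could buy at most {max_fraction:.0%} of the sector at current prices")

# Toy supply schedule: sceptical holders sell a fraction s(p) = k*(p - 1) of
# the float once the price rises to p times today's level (k is assumed;
# here a 20% rise draws out 10% of the float).
k = 0.5

# Believers' dollar capital buys a share fraction d(p) = max_fraction / p.
# Clearing condition k*(p - 1) = max_fraction / p gives a quadratic in p:
f = max_fraction
p = (k + math.sqrt(k * k + 4 * k * f)) / (2 * k)
print(f"Toy clearing price: {p:.2f}x today's level")  # ~1.17x
```

In this toy, even deploying the entire $1T only moves the sector ~17% against willing sellers, which is the sense in which the minority bid gets balanced rather than setting the price.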
There is almost no discussion of AGI prospects by financial analysts, consultants, etc. (generally, if they mention it at all, they just say they’re not going to consider it). E.g. they don’t report probabilities of it happening or estimate the profits it would produce.
Rohin is right that AGI by the 2030s is a contrarian view, and that there’s likely less than $1T of investor capital that buys that view and selects investments based on it.
I, like many EAs, made a lot of money betting in prediction markets that Trump wouldn’t overturn the 2020 election. The most informed investors had plenty of incentive to bet, and many did, but in the short term they were swamped by partisan ‘dumb money.’ The sane speculators have proportionally a bit more money to correct future mispricings after that event, but not much more. AI bets have done very well over the last decade but they’re still not enough for the most informed people to become a large share of the relevant pricing views on these assets.
1. We would welcome engagement from you regarding our argument that stock prices are not useful for forecasting timelines (the sign is ambiguous and the effect noisy).
2. You offer what is effectively a fully general argument against market prices ever being swayed by anything—a bit more on this point here. Price changes do not need to be driven by volume! (cf. the no-trade theorem, for the conceptual idea)
3. I’m not sure if this is exactly your point about prediction markets (or if you really want to talk about total capital, on which see again #2), but:
Sovereign debt markets are orders of magnitude larger than PredictIt or other political prediction markets. These are not markets where individual traders are capped to $600 max positions and shorting is limited (or whatever the precise regulations are)! Finding easy trades in these markets is …not easy.
But stocks are the more profitable and capital-efficient investment, so that’s where you’d see effects on market prices first (if much at all) for a given number of traders buying the investment thesis. That’s the main investment on this basis I see short-timelines believers making (including me), and it has in fact yielded a lot of excess returns since EAs started to identify it in the 2010s.
I don’t think anyone here is arguing against the no-trade theorem, and that’s not an argument that prices will never be swayed by anything; the claim is that you can have a sizable amount of money invested on the AGI thesis before it sways prices. Yes, price changes don’t need to be driven by volume if no one wants to trade against them. But plenty of traders who don’t buy AGI would trade against AGI-driven valuations, e.g. against the high P/E ratios that would ensue. Rohin is saying not that the majority of investment capital that doesn’t buy AGI will sit on the sidelines, but that it will trade against the AGI-driven bet, e.g. by selling assets at elevated P/E ratios. At the moment there is enough money trading against AGI bets that market prices are not in line with the AGI-bet valuations. I recognize that means the outside-view EMH heuristic of going with the side trading more money favors no AGI, but on the object level I think the contrarian view here is right.
It’s just a simple illustration that you can have correct minorities that have not yet been able to grow by profit or imitation to correct prices. And the election mispricings also occurred in uncapped crypto prediction markets (although the hassle of executing very quickly there surely deterred or delayed institutional investors), which is how some made hundreds of thousands or millions of dollars there.
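For what it’s worth, the P/E point above can be put in toy form. Every number and the crude capital-weighting rule are my own assumptions for illustration (a stand-in for real order-book dynamics, not a claim from the thread):

```python
# Toy sketch: traders who reject the AGI thesis don't sit out; they sell
# whenever the multiple rises above what their valuation justifies, so the
# clearing price sits far closer to their view than to the AGI valuation.
pe_agi = 60            # hypothetical multiple if the AGI thesis were priced in
pe_no_agi = 25         # hypothetical business-as-usual multiple

capital_agi = 1e12     # hypothetical capital that buys the AGI thesis
capital_no_agi = 9e12  # hypothetical capital willing to trade against it

# Crude rule: each side pulls the price toward its valuation in proportion
# to the capital it will deploy.
pe_market = (capital_agi * pe_agi + capital_no_agi * pe_no_agi) / (capital_agi + capital_no_agi)
print(f"Toy market P/E: {pe_market:.1f} (vs {pe_agi} under the AGI thesis)")
```

Even tripling the believers’ capital in this toy only moves the multiple to about 34, which is the sense in which a correct minority is swamped until it grows by profit or imitation.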
The argument here emphatically cannot be merely summarized as “AGI soon [is] a very contrarian position [and market prices are another indication of this]”.
Can you describe in concrete detail a possible world in which:
“AGI in 30 years” is a very contrarian position, including amongst hedge fund managers, bankers, billionaires, etc
Market prices indicate that we’ll get AGI in 30 years
It seems to me that if you were in such a situation, all of the non-contrarian hedge fund managers, bankers, billionaires would do the opposite of all of the trades that you’ve listed in this post, which would then push market prices back to rejecting “AGI in 30 years”; they have more money so their views dominate. What, concretely, prevents that from happening?
Minor (yet longwinded!) comment: FWIW, I think that:
Rohin’s comment seems useful
Stephen’s and your rebuttal also seem useful
Stephen’s and your rebuttal does seem relevant to what Rohin said even with his caveat included, rather than replying to a strawman
But the phrasing of your latest comment[1] feels to me overconfident, or somewhat like it’s aiming at rhetorical effect rather than just sharing data and inferences, or somewhat soldier-mindset-y
In particular, personally I dislike the use of “110%”, “maximally”, and maybe “emphatically”.
My intended vibe here isn’t “how dare you” or “this is a huge deal”.
I’m not at all annoyed at you for writing that way, I (think I) can understand why you did (I think you’re genuinely confident in your view and feel you already explained it, and want to indicate that?), and I think your tone in this comment is significantly less important than your post itself.
But I do want to convey that I think debates and epistemics on the Forum will typically be better if people avoid adding such flourishes/absolutes/emphatic-ness in situations like this (e.g., where the writing shouldn’t be optimized for engagingness or persuasion but rather collaborative truth-seeking, and where the disagreed-with position isn’t just totally crazy/irrelevant). And I guess what I’d prefer pushing toward is a mindset of curiosity about what’s causing the disagreement and openness to one’s own view also shifting.
(I should flag that I didn’t read the post very carefully, haven’t read all the comments, and haven’t formed a stable/confident view on this topic. Also I’m currently sleep-deprived and expect my reasoning isn’t super clear unfortunately.)
I also think the comment is overconfident in substance, but that’s something that happens often in productive debates, and I think that cost is worth paying and hard to totally avoid if we want productive debates to happen.
Unless they were edited in after these comments were written
For the record, these were not edited in after seeing the replies. (Possibly I edited them in a few minutes after writing the comment—I do that pretty frequently—but if so it was before any of the replies were written, and very likely before any of the repliers had seen my comment.)
At times like these, I don’t really know why I bother to engage on the EA Forum, given that people seem to be incapable of engaging with the thing I wrote instead of some totally different thing in their head.
I’ll just pop back in here briefly to say that (1) I have learned a lot from your writing over the years, (2) I have to say I still cannot see how I misinterpreted your comment, and (3) I genuinely appreciate your engagement with the post, even if I think your summary misses the contribution in a fundamentally important way (as I tried to elaborate elsewhere in the thread).