I guess I'm just slightly confused about what economists actually think here, since I'd always thought they took the idea that markets and investors are quite efficient most of the time fairly seriously.
I don't know much about this topic myself, but my understanding is that market efficiency is less about having the objectively correct view (or making the objectively right decision) and more about the difficulty of any individual investor making investments that systematically outperform the market. (An explainer page here helps clarify the concept.) So the concept, I think, is not that the market is always right, but that when the market is wrong (e.g. that generative AI is a great investment), you're probably wrong too. Or, more precisely, that you're unlikely to be systematically right more often than the market is right, and systematically wrong less often than the market is wrong.
As I understand it, there are differing views among economists on how efficient the market really is. And there is the somewhat paradoxical fact that people disagreeing with the market is part of what makes it as efficient as it is in the first place. For instance, some people worry that the rise of passive investing (e.g. via Vanguard ETFs) will make the market less efficient, since more people are just deferring to the market to make all the calls, and not trying to make calls themselves. If nobody ever tried to beat the market, then the market would become completely inefficient.
There is an analogy here to forecasting, with regard to epistemic deference to other forecasters versus herding that throws out outlier data and makes the aggregate forecast less accurate. If all forecasters just circularly updated until all their individual views were the aggregate view, surely that would be a big mistake. Right?
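As a toy illustration of that herding intuition (this is just a quick sketch of my own, with made-up numbers, and it models herding as blending toward the consensus rather than literally throwing out outliers): if each forecaster reports a mix of their private estimate and the running consensus, the aggregate ends up overweighting the earliest forecasts and is typically less accurate than a simple average of independent estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
true_value, noise_sd = 10.0, 2.0          # hypothetical quantity being forecast
n_forecasters, n_trials = 30, 5000
herd_weight = 0.7                          # how much each forecaster defers to the consensus

independent_errors, herded_errors = [], []
for _ in range(n_trials):
    signals = true_value + rng.normal(0, noise_sd, n_forecasters)  # private estimates

    # Independent aggregation: everyone reports their own estimate, we average.
    independent_errors.append(abs(signals.mean() - true_value))

    # Herding: each forecaster blends their estimate with the running consensus,
    # so later reports mostly echo earlier ones and outlier information gets damped.
    reports = [signals[0]]
    for s in signals[1:]:
        consensus = np.mean(reports)
        reports.append((1 - herd_weight) * s + herd_weight * consensus)
    herded_errors.append(abs(np.mean(reports) - true_value))

print("mean abs error, independent averaging:", round(np.mean(independent_errors), 3))
print("mean abs error, herded reports:        ", round(np.mean(herded_errors), 3))
```

In runs like this the herded aggregate is noticeably worse than the plain average, which is the sense in which everyone deferring to the aggregate would be a mistake.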
Do you have a specific forecast for AGI, e.g. a median year or a certain probability within a certain timeframe?
If so, I'd be curious to know how important AI investment is to that forecast. How much would your forecast change if it turned out the AI industry is in a bubble and the bubble popped, and the valuations of AI-related companies dropped significantly? (Rather than trying to specifically operationalize "bubble", we could just defer the definition of bubble to credible journalists.)
There are a few different reasons you've cited for credence in near-term AGI: investment in AI companies, the beliefs of certain AI industry leaders (e.g. Sam Altman), the beliefs of certain AI researchers (e.g. Geoffrey Hinton), and so on. I wonder how significant each of them is. I think each of these different considerations could be spun out into its own lengthy discussion.
I wrote a draft of a comment that addresses several different topics you raised, topic-by-topic, but it's far too long (2,000 words) and I'll have to put in a lot of work if I want to revise it down to a normal comment length. There are multiple different rabbit holes to go down, like Sam Altman's history of lying (which is why the OpenAI Board fired him) or Geoffrey Hinton's belief that LLMs have near-human-level consciousness.
I feel like going deeper into each individual reason for credence in near-term AGI and figuring out how significant each one is for your overall forecast could be a really interesting discussion. The EA Forum has a little-used feature called Dialogues that could be well-suited for this.
I guess markets are efficient most of the time, but stock market bubbles do exist, and are even common, which goes against the efficient market hypothesis. I believe this is a debated topic in economics, and I don't know what the current consensus is.
My own experience points in the direction of there being an AI bubble, as cases like Lovable suggest that investors are overvaluing companies. I cannot explain their valuations, other than that investors bet on things they do not understand. As I mentioned, anecdotally this seems to often be the case.