I don’t know much about this topic myself, but my understanding is that market efficiency is less about having the objectively correct view (or making the objectively right decision) and more about the difficulty of any individual investor making investments that systematically outperform the market. (An explainer page here helps clarify the concept.) So the concept, I think, is not that the market is always right, but that when the market is wrong (e.g. about generative AI being a great investment), you’re probably wrong too. Or, more precisely, that you’re unlikely to be systematically right more often than the market is right, and systematically wrong less often than the market is wrong.
As I understand it, there are differing views among economists on how efficient the market really is. And there is the somewhat paradoxical fact that people disagreeing with the market is part of what makes it as efficient as it is in the first place. For instance, some people worry that the rise of passive investing (e.g. via Vanguard ETFs) will make the market less efficient, since more people are just deferring to the market to make all the calls, and not trying to make calls themselves. If nobody ever tried to beat the market, then the market would become completely inefficient.
There is an analogy here to forecasting, with regard to epistemic deference to other forecasters versus herding that throws out outlier data and makes the aggregate forecast less accurate. If all forecasters just circularly updated until all their individual views were the aggregate view, surely that would be a big mistake. Right?
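To make the extreme case concrete, here is a minimal simulation sketch (the true probability, the initial consensus, and the noise level are all made-up numbers for illustration). It compares an aggregate of forecasters who report independent noisy signals against one where everyone just reports the existing consensus:

```python
import random
import statistics

random.seed(0)
truth = 0.70  # hypothetical true probability of some event

# Independent forecasters: each reports a noisy private signal,
# clipped to the valid probability range [0, 1].
independent = [min(max(truth + random.gauss(0, 0.15), 0.0), 1.0)
               for _ in range(100)]

# Fully herded forecasters: everyone circularly updates until they
# all report the current consensus, so no private information
# ever enters the aggregate.
consensus = 0.50  # hypothetical starting consensus
herded = [consensus for _ in range(100)]

err_independent = abs(statistics.mean(independent) - truth)
err_herded = abs(statistics.mean(herded) - truth)
```

Under these assumptions the independent aggregate lands close to the truth because the individual errors average out, while the herded aggregate is stuck at wherever the consensus started, which is the sense in which total herding destroys the value of aggregation.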
Do you have a specific forecast for AGI, e.g. a median year or a certain probability within a certain timeframe?
If so, I’d be curious to know how important AI investment is to that forecast. How much would your forecast change if it turned out the AI industry is in a bubble and the bubble popped, and the valuations of AI-related companies dropped significantly? (Rather than trying to precisely operationalize “bubble”, we could just defer to credible journalists on whether to call it one.)
There are a few different reasons you’ve cited for credence in near-term AGI — investment in AI companies, the beliefs of certain AI industry leaders (e.g. Sam Altman), the beliefs of certain AI researchers (e.g. Geoffrey Hinton), etc. — and I wonder how significant each of them is. I think each of these different considerations could be spun out into its own lengthy discussion.
I wrote a draft of a comment that addresses several different topics you raised, topic-by-topic, but it’s far too long (2000 words) and I’ll have to put in a lot of work if I want to revise it down to a normal comment length. There are multiple different rabbit holes to go down, like Sam Altman’s history of lying (which is why the OpenAI Board fired him) or Geoffrey Hinton’s belief that LLMs have near-human-level consciousness.
I feel like going deeper into each individual reason for credence in near-term AGI and figuring out how significant each one is for your overall forecast could be a really interesting discussion. The EA Forum has a little-used feature called Dialogues that could be well-suited for this.