I was also surprised by how highly the EMH post was received, for a completely different reason – the fact that markets aren’t expecting AGI in the next few decades seems unbelievably obvious, even before we look at interest rates. If markets were expecting AGI, AI stocks would presumably be much further to the moon than they are now (at least relative to non-AI stocks), and market analysts would presumably cite the possibility of AGI as the reason why, at least occasionally. But we weren’t seeing any of that, and we already knew from general observation of the zeitgeist that, until a few months ago, the prospect of AGI was overwhelmingly not taken seriously outside of a few niche sub-communities and AI labs (how to address this reality has been a consistent, well-known hurdle within the AI safety community).
So I’m a little confused about what exactly the judges thought the post’s value was – did they previously suspect that markets were taking AGI seriously, and this post significantly updated them towards thinking markets weren’t? Or maybe the judges thought the post was valuable for some other reason, unrelated to its main claim of “either reject EMH or reject AGI in the next few decades”, in which case I’d be curious to hear what that reason is (e.g., if the post caused OP to borrow a bunch of money, that would be interesting to know).
Granted, it’s an interesting analysis, but interestingness seems like a different question, and many of the other entries (including both those that did and those that didn’t win prizes) strike me as having advanced the discourse more, at least if we’re focusing on the main claims.
[Just commenting on the part you copied]
Feels way too overconfident. Would the cultures diverge due to communication constraints? Seems likely, though I could also imagine pathways by which it wouldn’t happen significantly, such as if a singleton had already been reached.
Would technological development diverge significantly, conditional on the above? Not necessarily, imho. If we don’t have a self-sufficient colony on Mars before we reach “technological maturity” (e.g., with APM and ASI), then presumably not – tech would hardly progress further at all at that point.
Would tech divergence imply that each world couldn’t truly track whatever weapons the other world had? Again, not necessarily. Perhaps one world would have better tech and could simply surveil the other.
Would there be a guaranteed first-strike advantage? Again, that seems debatable.
Etcetera.