[Question] Are AGI labs building up important intangibles?

There’s a toy model of AI development where it’s pretty easy to jump into cutting-edge research and be successful: all you need is a decent dataset, a decent algorithm, and lots of compute. In theory, all these things are achievable with money.

In practice, I assume it’s more complicated, and the top labs today are accumulating resources that are hard to replicate: things like know-how, organizational practices, internal technical tools, and relationships with key external orgs. These things are harder to quantify, and might not be as externally visible, but could pose a serious barrier to new entrants.

So, how much do these intangibles matter? Could new orgs easily become competitive with OpenAI/​DeepMind if they have lots of money to throw at the problem? Which intangibles matter most for keeping early labs ahead of their competitors?

I’d love to get a take from people with relevant domain knowledge.

  • Gwern’s scaling hypothesis post mentions this dynamic, but it’s hard to tell how important he thinks it is. He says “all of this hypothetically can be replicated relatively easily (never underestimate the amount of tweaking and special sauce it takes), [but] competitors lack the most important thing,” which is belief in the scaling hypothesis. Well, Google and Microsoft have jumped into the large language models game now; I’m guessing that many orgs will follow them in the coming decade, including some with lots of money. So how much does the special sauce actually matter?
