That’s the TLDR that I took away from the article too.
I agree that “disentanglement” is unclear. The skillset I previously thought was needed for this was something like IQ + practical groundedness + general knowledge + conceptual clarity, and that seems mostly confirmed by the present article.
It seems plausible that “start iterating and create feedback loops” is a better alternative to the “wait until things are clearer” strategy.
I have some lingering doubts here as well. I would flesh out an objection to the ‘disentanglement’ focus as follows: AI strategy depends critically on governments, some academic communities, and some companies, which are complex organizations. Suppose that complex organizations are best understood by an empirical, bottom-up approach rather than by top-down theorizing. Consider the medical establishment, which I have experience with. If I got ten smart effective altruists to generate mutually exclusive, collectively exhaustive (MECE) hypotheses about it, as the article proposes doing for AI strategy, they would, roughly speaking, hallucinate some nonsense that could be invalidated in minutes by someone with years of experience in the domain. So if AI strategy depends critically on the nature of complex institutions, then what we need for this research may be not so much conceptual disentanglement as high-level operational experience in these domains. Since people with such experience are hard to find, we may want to spend the intervening time interacting with these institutions or working within them on less important issues. Compared to this article, this perspective would de-emphasize the importance of disentanglement, while maintaining the emphasis on entering these institutions and increasing the emphasis on interacting with and making connections within them.