Thanks for writing this. My TL;DR is:
AI policy is important, but we don’t really know where to begin at the object level
You can potentially do 1 of 3 things, ATM:
A. “disentanglement” research
B. operational support for (e.g.) FHI
C. get in position to influence policy, and wait for policy objectives to be cleared up
Get in touch / Apply to FHI!
I think this is broadly correct, but have a lot of questions and quibbles.
I found “disentanglement” unclear. [14] gave the clearest idea of what this might look like. A simple toy example would help a lot.
Can you give some idea of what an operations role looks like? I find it difficult to visualize, and I think the uncertainty makes it less appealing.
Do you have any thoughts on why operations roles aren’t being filled?
One more policy area that seems worth starting on: programs that build international connections between researchers, especially around policy-relevant issues in AI such as ethics and safety.
The timelines for effective interventions in some policy areas may be short (e.g. 1-5 years), and it may not be possible to wait for disentanglement to be “finished”.
Is it reasonable to expect the “disentanglement bottleneck” to be cleared at all? Would disentanglement actually make policy goals clear enough? Trying to anticipate all the potential pitfalls of a policy is a bit like trying to anticipate all the potential pitfalls of a particular AI design or reward specification. Fortunately, there is a bit of a disanalogy: with policy we are more likely to have a chance to correct mistakes (although that could still be very hard, or even impossible). It seems plausible that “start iterating and create feedback loops” is a better alternative to the “wait until things are clearer” strategy.
That’s the TL;DR that I took away from the article too.
I agree that “disentanglement” is unclear. The skillset I previously thought was needed for this is something like IQ + practical groundedness + general knowledge + conceptual clarity, and the present article mostly seems to confirm that.
“It seems plausible that ‘start iterating and create feedback loops’ is a better alternative to the ‘wait until things are clearer’ strategy.”
I have some lingering doubts here as well. I would flesh out an objection to the ‘disentanglement’ focus as follows: AI strategy depends critically on governments, certain academic communities, and certain companies, all of which are complex organizations. Suppose that complex organizations are best understood by an empirical, bottom-up approach rather than by top-down theorizing. Consider the medical establishment, which I have experience with: if I got ten smart effective altruists to generate mutually exclusive, collectively exhaustive (MECE) hypotheses about it, as the article proposes doing for AI strategy, they would, roughly speaking, hallucinate some nonsense that someone with years of experience in the domain could invalidate in minutes.

So if AI strategy depends in critical respects on the nature of complex institutions, then what we need for this research may not be conceptual disentanglement so much as high-level operational experience of these domains. Since people with that experience are hard to find, we may want to spend the intervening time interacting with these institutions, or working within them on less important issues. Compared to this article, this perspective would de-emphasize the importance of disentanglement, maintain the emphasis on entering these institutions, and increase the emphasis on interacting with and making connections within them.