Hi Carrick,

Thanks for your thoughts on this. I found this really helpful, and I think 80,000 Hours could consider linking to it from the AI policy guide.
Disentanglement research feels like a valid concept, and it’s great to see it laid out here. But given how much weight pivots on the idea, and how much uncertainty surrounds identifying these skills, disentanglement research seems to be a subject that is itself asking for further disentanglement! Perhaps it could serve as a trial question for any prospective disentanglers out there.
You’ve given examples of some entangled and under-defined questions in AI policy and pointed to Bostrom as someone exhibiting strong disentanglement skills; Ben has detailed an example of an AI strategy question that seems to require some sort of “detangling” skill; Jade has given an illuminating abstract picture. These are each very helpful. But so far, the examples are either specific to AI strategy or entirely abstract. The process of identifying the general attributes of good disentanglers and disentanglement research might be helped by a broader range of examples, including instances of disentanglement research outside the field of AI strategy. Both answered and unanswered research questions of this sort might be useful. (I admit I can’t think of any good examples right now.)
Moving away from disentanglement, I’ve been interested for some time by your fourth, tentative suggestion for existing policy-type recommendations to
“fund joint intergovernmental research projects located in relatively geopolitically neutral countries with open membership and a strong commitment to a common good principle.”
This is a subject I haven’t been able to find much written material on—if you’re aware of any, I’d be very interested to know about it. It isn’t completely clear whether, or how, to push for an idea like this. Additionally, judging by the lack of literature, it seems this hasn’t received as much thought as it should, even in an exploratory sense (though, being outside a strategy research cluster, I could be wrong about this). You mention that race dynamics are easier to start than stop, and early intergovernmental initiatives are one of the few tools that can plausibly prevent, slow, or stop international races of this sort. These considerations lead me to believe that this “recommendation” is actually more of a high-priority research area. Exploring it appears robustly positive in expectation. I’d be interested to hear other perspectives on this subject, and to know whether any groups or individuals are currently working on or thinking about it—and if not, how research on it might best be started, if indeed it should be.
For five years, my favorite subject to read about was talent. Unlike developmental psychologists, I did not spend most of that learning time on learning disabilities. I have also done a lot of intuition calibration, which helps me detect various neurological differences in people. As a result, I have a rare area of knowledge and an unusual skill that may be useful for figuring out what types of people have a particular kind of potential, what they’re like, what correlates with their talent(s), what they might need, and how to find and identify them. If any fellow EAs can put this to use, feel free to message me.