I’m not sure this is worth a full post, especially since the original post didn’t receive much positive feedback (or almost any feedback, period). However, I was excited to discover recently that Kumu seems to handle data exported from ORA fairly well, and I figured “why not make it accessible,” rather than just relying on screenshots (as I did in the original article).
To rehash the original post/pitch, I think that a system like this could:
1a) reduce the time necessary to conduct literature reviews and similar tasks in AI policy research;
1b) improve research quality by reducing the likelihood that researchers will overlook important considerations prior to publishing or that they will choose a suboptimal research topic; and
2) serve as a highly scalable, low-oversight task for entry-level researchers (e.g., interns/students) who want to get experience in AI policy but were unsuccessful in applying to other positions (e.g., SERI) that suffer from mentorship constraints. By contrast, I think this work would require very little senior researcher oversight on a per-contributor basis (perhaps a 1:30 ratio of senior researchers to contributors, if senior researchers are even necessary at all?).
The following example screenshots from Kumu will be ugly/disorienting (as the ORA screenshots were), since I have put minimal effort into optimizing the view; you really need to zoom in, as the text is otherwise unreadable. Without further ado, here is a sample of what’s on the Kumu project:
I have created an interactive/explorable version of my incomplete research graph of AI policy considerations, which is accessible here: https://kumu.io/hmdurland/ai-policy-considerations-mapping-ora-to-kumu-export-test-1#untitled-map