On a closely related topic, I also highly recommend the final section of Allan Dafoe's post AI Governance: Opportunity and Theory of Impact. I think the ideas in that final section can be applied (with some modifications) to a wide variety of domains other than AI governance.
Here's an excerpt from that section:
Within any given topic area, what should our research activities look like so as to have the most positive impact? To answer this, we can adopt a simple two-stage asset-decision model of research impact. At some point in the causal chain, impactful decisions will be made, be they by AI researchers, activists, public intellectuals, CEOs, generals, diplomats, or heads of state. We want our research activities to provide assets that will help those decisions to be made well. These assets can include: technical solutions; strategic insights; shared perception of risks; a more cooperative worldview; well-motivated and competent advisors; credibility, authority, and connections for those experts. There are different perspectives on which of these assets, and what breadth of assets, are worth investing in.
On the narrow end of these perspectives is what I'll call the product model of research, which regards the value of funding research to be primarily in answering specific important questions. The product model is optimally suited for applied research with a well-defined problem. [...]
I believe the product model substantially underestimates the value of research in AI safety and, especially, AI governance; I estimate that the majority (perhaps ~80%) of the value of AI governance research comes from assets other than the narrow research product[7]. Other assets include (1) bringing diverse expertise to bear on AI governance issues; (2) otherwise improving, as a byproduct of research, AI governance researchers' competence on relevant issues; (3) bestowing intellectual authority and prestige on individuals who have thoughtful perspectives on long-term risks from AI; (4) growing the field by expanding the researcher network, access to relevant talent pools, improved career pipelines, and absorptive capacity for junior talent; and (5) screening, training, credentialing, and placing junior researchers. Let's call this broader perspective the field building model of research, since the majority of value from supporting research comes from the ways it grows the field of people who care about long-term AI governance issues, and improves insight, expertise, connections, and authority within that field.
Ironically, though, to achieve this it may still be best for most people to focus on producing good research products.