Without weighing in on your perspective/position here, I'd like to share a section of Allan Dafoe's post AI Governance: Opportunity and Theory of Impact that you/some readers may find interesting:
Within any given topic area, what should our research activities look like so as to have the most positive impact? To answer this, we can adopt a simple two stage asset-decision model of research impact. At some point in the causal chain, impactful decisions will be made, be they by AI researchers, activists, public intellectuals, CEOs, generals, diplomats, or heads of state. We want our research activities to provide assets that will help those decisions to be made well. These assets can include: technical solutions; strategic insights; shared perception of risks; a more cooperative worldview; well-motivated and competent advisors; credibility, authority, and connections for those experts. There are different perspectives on which of these assets, and the breadth of the assets, that are worth investing in.
On the narrow end of these perspectives is what I'll call the product model of research, which regards the value of funding research to be primarily in answering specific important questions. The product model is optimally suited for applied research with a well-defined problem. [...]
I believe the product model substantially underestimates the value of research in AI safety and, especially, AI governance; I estimate that the majority (perhaps ~80%) of the value of AI governance research comes from assets other than the narrow research product[7]. Other assets include (1) bringing diverse expertise to bear on AI governance issues; (2) otherwise improving, as a byproduct of research, AI governance researchers' competence on relevant issues; (3) bestowing intellectual authority and prestige to individuals who have thoughtful perspectives on long term risks from AI; (4) growing the field by expanding the researcher network, access to relevant talent pools, improved career-pipelines, and absorptive capacity for junior talent; and (5) screening, training, credentialing, and placing junior researchers. Let's call this broader perspective the field building model of research, since the majority of value from supporting research occurs from the ways it grows the field of people who care about long term AI governance issues, and improves insight, expertise, connections, and authority within that field.
Ironically, though, to achieve this it may still be best for most people to focus on producing good research products.
This reminds me of a conversation I had with John Wentworth on LessWrong, exploring the idea that establishing a scientific field is a capital investment for efficient knowledge extraction. It also reminds me of a piece of writing I just completed there on expected value calculations, which outlines some of the challenges of acting strategically to reduce our uncertainty.
One interesting thing to consider is how to control such a capital investment, once it is made. Institutions have a way of defending themselves. Decades ago, people launched the field of AI research. Now, it's questionable whether humanity can ever gain sufficient control over it to steer toward safe AI. It seems that instead, "AI safety" had to be created as a new field, one that seeks to impose itself on the world of AI research partly from the outside.
It's hard enough to create and grow a network of researchers. To become a researcher at all, you have to be unusually smart and independent-minded, and willing to brave the skepticism of people who don't understand what you do even a fraction as well as you do yourself. You have to know how to plow through to an achievement that will clearly stand out to others as an accomplishment, and persuade them to keep sustaining your funding. That's the sort of person who becomes a scientist. Anybody with those characteristics is a hot commodity.
How do you convince a whole lot of people with that sort of mindset to work toward a new goal? That might be one measure of a "good research product" for a nascent field. If it's good enough to convince more scientists, especially more powerful scientists, that your research question is worth additional money and labor relative to whatever else they could fund or work on, you've succeeded. That's an adversarial contest. After all, you have to fight to get and keep their attention, and then to persuade them. And these are some very intelligent, high-status people. They absolutely have better things to do, and they're at least as bright as you are.