This reminds me of a conversation I had with John Wentworth on LessWrong, exploring the idea that establishing a scientific field is a capital investment for efficient knowledge extraction. It also reminds me of a piece of writing I just completed there on expected value calculations, outlining some of the challenges in acting strategically to reduce our uncertainty.
One interesting thing to consider is how to control such a capital investment, once it is made. Institutions have a way of defending themselves. Decades ago, people launched the field of AI research. Now, it’s questionable whether humanity can ever gain sufficient control over it to steer toward safe AI. It seems that instead, “AI safety” had to be created as a new field, one that seeks to impose itself on the world of AI research partly from the outside.
It’s hard enough to create and grow a network of researchers. To become a researcher at all, you have to be unusually smart and independent-minded, and willing to brave the skepticism of people who don’t understand what you do even a fraction as well as you do yourself. You have to know how to plow through to a result that will clearly stand out to others as an accomplishment, and to persuade them to keep funding you. That’s the sort of person who becomes a scientist. Anybody with those characteristics is a hot commodity.
How do you convince a whole lot of people with that sort of mindset to work toward a new goal? That might be one measure of a “good research product” for a nascent field. If it’s good enough to convince more scientists, especially more powerful scientists, that your research question is worth additional money and labor relative to whatever else they could fund or work on, you’ve succeeded. That’s an adversarial contest. After all, you have to fight to get and keep their attention, and then to persuade them. And these are some very intelligent, high-status people. They absolutely have better things to do, and they’re at least as bright as you are.