I like the idea, but the data seems sketchy. For example, the notion of “government control” seems poorly applied:
- you assign 0 USG control for “airplane,” but historically government “control” has been very high (Boeing, Lockheed, and Grumman were all operating for the USG during WW2)
- you assign 0 to the “decoding of the human genome,” but the Human Genome Project was initiated, largely funded, and directed by the U.S. government
- some entries are broad categories (e.g., “Nanotechnology”) while others are highly specific (e.g., “Extending the host range of a virus via rational protein design”), which makes the list feel arbitrary. Why is the “Standard model of physics” on the list but not other major theories of physics (e.g., QM or relativity)? Why aren’t neural nets on here?
Thank you, these are good points!

On the notion of “USG control”:

I agree that the labeling of USG control is imperfect and only an approximation. I think it’s still a reasonable approximation, though.
Almost all of the USG control labels I used were taken from Anderson-Samways’s research. He gives explanations for each of his labels; for the airplane, e.g., he considers the relevant inventors to be the Wright brothers, who weren’t government-affiliated at the time. It’s probably best to refer to his research if you want to verify how much to trust the labels.

You may have detailed contentions with each of these labels, but you might still expect that, on average, they give a reasonable approximation of USG control. This is how I see the data.
On the list of innovations feeling arbitrary:
I share this concern but, again, I feel the list of innovations is still reasonably meaningful. As I said in the piece:
> Choices regarding which stage of development and deployment to identify as “the invention” of the technology aren’t consistent. The most important scientific breakthroughs are often made some time before the first full deployment of a technology which in turn is often done before crucial hurdles to deployment at scale are overcome. This matters for the data insofar as the labeling of the invention year and the extent of USG control aren’t applied consistently to the same stage of development and deployment. This should not be detrimental, considering it’s not clear what the crucial stage of development and deployment for AGI will be either.[6] Nevertheless, it makes the data less precise and more of an approximation.
(I was trying to get at something similar to your concern about “specific versus broad” innovations. “Early-stage development versus mass-scale deployment” is often pretty congruent with “specific scientific breakthrough” versus “broad set of related breakthroughs and their deployment”.)
Many other important innovations are missing from the list mostly because of time constraints.
> He gives explanations for each of his labels; for the airplane, e.g., he considers the relevant inventors to be the Wright brothers, who weren’t government-affiliated at the time. It’s probably best to refer to his research if you want to verify how much to trust the labels.
By that token, AI won’t be government controlled either because neural networks were invented by McCulloch/Pitts/Rosenblatt with minimal government involvement. Clearly this is not the right way to think about government control of technologies.
I don’t think it is clear what the “crucial step” in AGI development will look like—will it be a breakthrough in foundational science, or massive scaling, or combining existing technologies in a new way? It’s also unclear how the different stages of the reference technologies would map onto stages for AGI. I think it is reasonable to use reference cases that have a mix of different stages/‘cutoff points’ that seem to make sense for the respective innovation.
Ideally, one would find a more principled way to control for the different stages/“crucial steps” of the different technologies. Maybe one could quantify the government control at each stage of each technology, and assign weights to the stages depending on how important each might be for AGI. But I had limited time, and I think my approach is a decent approximation.
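To make the weighting idea concrete, here is a minimal sketch of the kind of scheme I have in mind. The stage names, per-stage control scores, and weights below are all hypothetical placeholders, not actual labels from the dataset:

```python
# Hypothetical sketch: a technology's overall USG-control score as a
# weighted average over development stages. All numbers are illustrative,
# not real labels from the dataset.

def weighted_usg_control(stage_scores, stage_weights):
    """Combine per-stage USG-control scores (0 = none, 1 = full control)
    using weights reflecting each stage's assumed importance for AGI."""
    assert set(stage_scores) == set(stage_weights)
    total_weight = sum(stage_weights.values())
    return sum(stage_scores[s] * stage_weights[s] for s in stage_scores) / total_weight

# Illustrative example: the airplane, scored at three made-up stages.
airplane_scores = {
    "foundational breakthrough": 0.0,  # Wright brothers, no USG affiliation
    "early deployment": 0.3,           # some military procurement
    "deployment at scale": 0.8,        # WW2-era production for the USG
}
weights = {
    "foundational breakthrough": 0.5,
    "early deployment": 0.3,
    "deployment at scale": 0.2,
}

print(weighted_usg_control(airplane_scores, weights))  # ≈ 0.25
```

Under this toy weighting, the airplane's score would land between the “0” one gets from looking only at the invention and the much higher score one gets from looking at WW2-era production, which is roughly the kind of compromise a principled scheme would need to make.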