Currently researching how involved the US government may become in the development of AGI, and by what methods. I try to learn from history by generalizing from past cases of US government involvement in developing general-purpose technologies. (As a participant in the Pivotal Research Fellowship.)
Previously, I researched whether the cost-benefit analysis used by US regulators might block or discourage frontier AI regulation. (Supervised by John Halstead, GovAI.)
I also sometimes worry about the big-picture epistemics of EA, à la “Is EA just an ideology like any other?”.
In the past, I’ve done operations and recruiting at GovAI, CEA, and the SERI ML Alignment Theory Scholars program. My degree is in Computer Science.
I don’t think it is clear what the “crucial step” in AGI development will look like: will it be a breakthrough in foundational science, massive scaling, or a novel combination of existing technologies? It’s also unclear how the stages of the reference technologies would map onto stages of AGI development. I think it is reasonable to use reference cases with a mix of different stages/“cutoff points”, chosen to make sense for each respective innovation.
Ideally, one would find a more principled way to control for the different stages/“crucial steps” of the different technologies. For example, one could quantify the degree of government control at each stage for each technology, and assign weights to the stages according to how important each might be for AGI. But I had limited time, and I think my approach is a decent approximation.