One of the takeaways that Allen leaves his readers with is that “this policy signals that the Biden administration believes the hype about the transformative potential of AI and its national security implications is real.” That sentiment probably feels familiar to many readers of this forum.
To be clear, it’s good that this sentiment exists, that you mention and consider it, and that Allen mentions it and may believe it.
But if you (or Allen) mean to suggest that this action is even partially motivated by concern about “transformative AI” in the Holden sense (much less the full-on “FOOM” sense), that seems very unlikely, and suggesting so is probably misleading.
Approximately everyone believes “AI is the future” in some sense. For example, we can easily think of dozens of private and public projects, pushed for by sophisticated management at top companies, that are branded “AI” or “ML” and that often turn out to be boondoggles, e.g., Zillow buying and flipping houses with algorithms.
These were claimed to be “transformative”, but only in a limited business sense. That is probably closer to the meaning of “transformative” being used here.