I agree with the criticism. The quotes provided aren’t good evidence that she is personally concerned about x-risk. We just don’t have much information about her views on catastrophic risks. I’ve updated the text to reflect this and tried to encompass more of what Trump has said about AI as well. Also edited a few other parts of the piece.
I’ve pasted the new text for Harris below:
Harris tends to focus on present harms, but has expressed some concern about existential risk.
Harris has generally put more emphasis on current harms. In a November 2023 speech, she highlighted that local and personal harms feel existential to the individuals affected, implicitly deprioritizing globally existential threats posed by AI. That said, in the same speech she acknowledged that AI might “endanger the very existence of humanity”, citing “AI-formulated bioweapons” and “AI-enabled cyberattacks” as particular concerns.
In general, it seems reasonable to expect that Harris will at least not reverse the Biden-Harris administration’s previous actions on AI safety. The Biden administration has made impressive progress on AI safety policy, including the establishment of the US AI Safety Institute, securing voluntary commitments on AI safety from many companies, and the 2023 AI Executive Order.
Harris was personally behind the voluntary AI safety commitments of July 2023. Here’s a press release from the White House:

The Vice President’s trip to the United Kingdom builds on her long record of leadership to confront the challenges and seize the opportunities of advanced technology. In May, she convened the CEOs of companies at the forefront of AI innovation, resulting in voluntary commitments from 15 leading AI companies to help move toward safe, secure, and transparent development of AI technology. In July, the Vice President convened consumer protection, labor, and civil rights leaders to discuss the risks related to AI and to underscore that it is a false choice to suggest America can either advance innovation or protect consumers’ rights.

As part of her visit to the United Kingdom, the Vice President is announcing the following initiatives.

The United States AI Safety Institute: The Biden-Harris Administration, through the Department of Commerce, is establishing the United States AI Safety Institute (US AISI) inside NIST. …
See also Foreign Policy’s piece “Kamala Harris’s Record as the Biden Administration’s AI Czar”.