In a Nov 2023 speech Harris mentioned she’s concerned about x-risk and risks from cyber & bio. She has generally put more emphasis on current harms but so far without dismissing the longer-term threats.
This seems like a very generous interpretation of her speech to me. I feel like you are seeing what you want to see.
For context, this was a speech given when she came to the UK for the AI Safety Summit, which was explicitly about existential safety. She didn’t really have a choice but to mention existential risks unless she wanted to give a major snub to an important US ally, so she did:
But just as AI has the potential to do profound good, it also has the potential to cause profound harm. From AI-enabled cyberattacks at a scale beyond anything we have seen before to AI-formulated bio-weapons that could endanger the lives of millions, these threats are often referred to as the “existential threats of AI” because, of course, they could endanger the very existence of humanity. (Pause)
These threats, without question, are profound, and they demand global action.
… and that’s it. That’s all she said about existential risks. She then immediately derails the conversation by offering a series of non-sequiturs:
But let us be clear. There are additional threats that also demand our action — threats that are currently causing harm and which, to many people, also feel existential.
Consider, for example: When a senior is kicked off his healthcare plan because of a faulty AI algorithm, is that not existential for him?
When a woman is threatened by an abusive partner with explicit, deep-fake photographs, is that not existential for her?
When a young father is wrongfully imprisoned because of biased AI facial recognition, is that not existential for his family?
I think it’s pretty clear that these are not the sorts of things you say if you are actually concerned about existential risks. No-one genuinely motivated by fear of the deaths of every human on earth, and all future generations, goes around saying “oh yeah, and a single person’s health insurance admin problems, that is basically the same thing”.
I won’t quote the speech in full, but I think it is worth looking at. She repeatedly returns to potential harms of AI, but never—once the bare necessities of diplomatic politeness have been met—does she bother to return to catastrophic risks. Instead we have:
… make sure that the benefits of AI are shared equitably and to address predictable threats, including deep fakes, data privacy violations, and algorithmic discrimination.
and
… establish a national safety reporting program on the unsafe use of AI in hospitals and medical facilities. Tech companies will create new tools to help consumers discern if audio and visual content is AI-generated. And AI developers will be required to submit the results of AI safety testing to the United States government for review.
and
… protect workers’ rights, advance transparency, prevent discrimination, drive innovation in the public interest, and help build international rules and norms for the responsible use of AI.
and
the wellbeing of their customers, the safety of our communities, and the stability of our democracies.
and
… the principles of privacy, transparency, accountability, and consumer protection.
My interpretation here, that she is basically rejecting AI safety, is not unusual. You can see, for example, Politico here calling it a ‘rebuke’ to Sunak and his focus on existential risks, and making clear that it was very deliberate.
Overall this actually makes me more pessimistic about Kamala. You clearly wrote this post with a soldier mindset and looked for the best evidence you could find to show that Kamala cared about existential risks, so if this speech, which I think basically suggests the opposite, is the best you could find, then that seems like a pretty big negative update. In particular it seems worse than Trump, who gave a fairly clear explanation of one causal risk pathway—deepfakes causing a war—and did so without being explicitly asked about existential risks and without a teleprompter. Are there any examples of Kamala, unprompted, bringing up in an interview the risk of AI causing a nuclear war, or taking over the human race?
I agree with your point that the record of the Biden Administration seems fairly good here, and she might continue it out of status quo bias, continuity of staff, and so on. But in terms of her specific views she seems significantly less well aligned than Biden or Rishi were, and maybe less than Trump.
(I previously wrote about this here)
I agree with the criticism. The quotes provided aren’t good evidence that she is personally concerned about x-risk. We just don’t have much information about her views on catastrophic risks. I’ve updated the text to reflect this and tried to encompass more of what Trump has said about AI as well. Also edited a few other parts of the piece.
I’ve pasted the new text for Harris below:
Harris tends to focus on present harms, but has expressed some concern about existential risk.
Harris has generally put more emphasis on current harms. In a November 2023 speech, she highlighted that local/personal harms feel existential to individuals, implicitly deprioritizing the globally existential threats posed by AI. That said, in the same speech she acknowledged that AI might “endanger the very existence of humanity”, citing “AI-formulated bioweapons” and “AI-enabled cyberattacks” as particular concerns.
In general, it seems reasonable to expect that Harris will at least not reverse the Biden-Harris administration’s previous actions on AI safety. The Biden administration has made impressive progress on AI safety policy, including the establishment of the US AI Safety Institute, securing voluntary commitments on AI safety from many companies, and the 2023 AI Executive Order.
Harris was the one personally behind the voluntary AI safety commitments of July 2023. Here’s a press release from the White House:
The Vice President’s trip to the United Kingdom builds on her long record of leadership to confront the challenges and seize the opportunities of advanced technology. In May, she convened the CEOs of companies at the forefront of AI innovation, resulting in voluntary commitments from 15 leading AI companies to help move toward safe, secure, and transparent development of AI technology. In July, the Vice President convened consumer protection, labor, and civil rights leaders to discuss the risks related to AI and to underscore that it is a false choice to suggest America can either advance innovation or protect consumers’ rights.
As part of her visit to the United Kingdom, the Vice President is announcing the following initiatives.
The United States AI Safety Institute: The Biden-Harris Administration, through the Department of Commerce, is establishing the United States AI Safety Institute (US AISI) inside NIST. …
See also Foreign Policy’s piece Kamala Harris’s Record as the Biden Administration’s AI Czar.
You have a point: we cannot be sure what Harris’s beliefs about AI and AI safety truly are deep down, and I myself am skeptical she deeply believes AI is a true existential risk. However, her personal views matter less than one might think. Politicians are constantly triangulating between their various political needs (their constituents, donors, domestic political allies, international allies, etc.) and what they think is the best policy, so personal views typically only matter on the margin.
When public officials issue statements on policy, those statements are the narrow window we get into their views and into what they are likely to do. This is how the world of politics and policy works. For example, the US government listens when Chinese officials make diplomatic statements on various issues at the UN or elsewhere, and voters listen to a campaign’s message. Politicians do lie and break promises, but they do so at some political cost. Actions speak much louder than words, but when it comes to the future, words are all we have.
Yes, she spoke at the AI Safety Summit, but she chose to speak there. She could have spoken at any number of events on other topics, whether trade, security, climate change, etc. The choice of venue demonstrates her (and the US’s) commitment to the issue. Additionally, she could have simply not mentioned existential risk; I agree that would have been odd, but it would hardly have been an international snub.
I agree that the quote is pretty weak evidence. Her focus on AI issues outside of existential risk is suboptimal, but ultimately I’m in favor of regulating issues like AI discrimination and AI bias, even if I think they’re substantially less important. And is it really a negative? If she’s genuinely pro-regulation on ‘near-term’ AI issues like AI bias, wouldn’t that, on the margin, push her to be pro-regulation on AI generally? Either way, I think it’s mostly irrelevant.
I think the much stronger evidence in support of Harris on AI policy is Biden’s record on the issue. Unless explicitly stated otherwise, I expect most Dem presidencies to continue policymaking in a similar direction to previous Dem admins. I also think we can trust a Dem admin to defer to experts on policymaking.
I also take the fact that the RNC platform explicitly pledges to roll back Biden’s EO as strong evidence. See below:
We will repeal Joe Biden’s dangerous Executive Order that hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology. In its place, Republicans support AI Development rooted in Free Speech and Human Flourishing. (link)
Am I worried that Harris won’t continue Biden’s approach? Yes. But I think the evidence is pretty clear that Harris is better than Trump on AI policy.