Excerpts from “Majority Leader Schumer Delivers Remarks To Launch SAFE Innovation Framework For Artificial Intelligence At CSIS”
Link post
Some excerpts:
“And we are still just at the beginning. Some experts predict that in just a few years the world could be wholly unrecognizable from the one we live in today. That is what AI is: World-altering.
… But there are real dangers too: job displacement, misinformation, a new age of weaponry, and the risk of being unable to manage this technology altogether.
… But with AI, we cannot be ostriches sticking our heads in the sand.
… Second, Congress will also need to invent a new process to develop the right policies to implement our framework. AI moves so quickly and changes at a near exponential speed, and there’s such little legislative history on this issue, so a new process is called for. The traditional approach of Committee hearings play an essential role, but on their own won’t suffice.
… That is why later this year, I will invite the top AI experts to come to Congress and convene a series of first-ever AI Insight Forums, for a new and unique approach to developing AI legislation.
… And of course, those algorithms represent the highest level of intellectual property for AI developers. Forcing companies to reveal their IP would be harmful, it would stifle innovation, and it would empower our adversaries to use them for ill.
… Guarding against doomsday scenarios (under a list titled “each Insight Forum will focus on the biggest issues in AI, including”)”
General thoughts:
This speech is a substantive positive update for me on how seriously the US government is taking these risks. Not only is Schumer concerned about AI x-risks; he also realises that existing legislative mechanisms are unable to cope with them. This suggests that more work should go into designing potential new mechanisms that will allow us to make progress on this problem.
I expect that comments from political leaders will play a significant role in normalising the discussion of x-risks such that people will stop dismissing them as sci-fi. I wish I had a better understanding of the exact dynamics of normalisation here, but my guess would be that the adoption of an issue by mainstream (non-edgy) politicians helps create common knowledge that an idea has achieved a certain level of respectability.