“Today, U.S. Secretary of Commerce Gina Raimondo announced the creation of the U.S. AI Safety Institute Consortium (AISIC), which will unite AI creators and users, academics, government and industry researchers, and civil society organizations in support of the development and deployment of safe and trustworthy artificial intelligence (AI)”
It’s not clear to me whether this also marks the first announcement of the “US AI Safety Institute” itself, as distinct from the Consortium, but either way, this seems like great news.
Thanks for sharing this! I’m going to use this thread as a chance to flag some other recent updates (in no particular order and with no selection criteria; just things I’ve recently found notable or mentioned to people):
California proposes sweeping safety measure for AI — State Sen. Scott Wiener wants to require companies to run safety tests before deploying AI models. (link goes to “Politico Pro”; I only see the top half)
See also Senator Scott Wiener’s Twitter thread on the bill (note the endorsements)
See also the California effect
Trump: AI ‘maybe the most dangerous thing out there’ (seems mostly focused on voting-related robocalls/deepfakes and digital currency)
Jacobin publishes an article on AI existential risk (Twitter)