Updates from Campaign for AI Safety


Hi!

📰 AI Regulation Progress

๐ŸŒ Californiaโ€™s SB 294, the Safety in AI Act, introduced by Senator Scott Wiener on September 13, aims to enhance AI safety. The bill outlines provisions for responsible scaling, liability for safety risks, CalCompute, and KYC policies.

🇬🇧 UK's Frontier AI Taskforce unveiled its expert panel, including luminaries like David Krueger and Yoshua Bengio.

🇪🇺 Ursula von der Leyen, in her State of the European Union (SOTEU) speech, highlighted the need for AI regulation, with a particular emphasis on safety.

🇺🇸 Senators Richard Blumenthal and Josh Hawley announced a bipartisan AI regulation framework in the US that would introduce licensing for AI training.

These developments mark a profound shift in AI policy in just six months and are a testament to tireless advocacy and visionary policymaking. Unfortunately, none of the current legislative proposals would prevent or delay the development of superintelligent AI:

Policy scorecards by Daniel Colson / AI Policy Institute

🔬 USA AI x-risk perception tracker

📊 The second wave, conducted on August 27–28, 2023, showed that x-risk perception remained steady, while more people in the USA appear to agree with the incorrigibility of advanced AI and with short AGI timelines.


📣 International #PauseAI protests on 21 October 2023

๐ŸŒ On 21 October, join #PauseAI protests across the globe. From San Francisco to London, Jerusalem to Brussels, and more, we unite to address the rapid rise of AI power. Our message is clear: itโ€™s time for leaders to take AI risks seriously.

🗓️ October 21st (Saturday), in multiple countries
🇺🇸 US, California, San Francisco (Sign up)
🇬🇧 UK, Parliament Square, London (Sign up, Facebook)
🇮🇱 Israel, Jerusalem (Sign up)
🇧🇪 Belgium, Brussels (Sign up)
🇳🇱 Netherlands, Den Haag (Sign up)
🇮🇹 Italy (Sign up)
🇩🇪 Germany (Sign up)
🌎 Your country here? Discuss on Discord!


📣 Protest against the irreversible proliferation of model weights at Meta HQ

Stand with Holly Elmore for AI safety! Meta's open release of its AI model weights puts our safety at risk!

🗓️ Protest: 29 September 2023, 4:00 PM PDT
📍 Location: 250 Howard St, outside the Meta office building, San Francisco


📃 Policy updates

On the policy front, we have made our submission to Innovation, Science and Economic Development Canada's consultation on the Canadian Guardrails for Generative AI – Code of Practice.

Next, we are working on the following:

Do you know of other inquiries? Please let us know. You may respond to this email if you want to contribute to the upcoming consultation papers.


📜 Petition updates

🇬🇧 For our supporters in the UK, there's an ongoing petition led by Greg Colbourn. This petition urges the global community to consider a worldwide moratorium on AI technology development due to human extinction risks. As of now, the petition has garnered 48 signatures in support of this crucial cause.


Campaign media coverage

The Roy Morgan research into Australians' attitudes regarding AI and x-risk was covered in ACS Information Age, B&T, Cryptopolitan, Startup Daily, and Women's Agenda, and was mentioned on Sky News.

InDaily (South Australia) wrote about the recent South Australian consultation, with a focus on the use of AI tools in the public sector.


Thank you for your support! Please donate to the campaign to help us fund ads in London ahead of the UK AI summit. Please share this email with friends.

Campaign for AI Safety
campaignforaisafety.org
