Update from Campaign for AI Safety

Link post

Hi!

I am starting to share weekly updates from the Campaign for AI Safety on this forum. Please sign up on the campaign website to receive these by email.


šŸŒ The big global news this week is that Rishi Sunak promised to look into AI existential risk. Clearly, the collective efforts of the AI safety community are bearing some fruit. Congratulations to A/ā€‹Prof Krueger and Center for AI Safety on this achievement. Now, letā€™s go all the way to a moratorium!


šŸ§‘ā€āš–ļø We have announced the judging panel for the student competition for drafting a treaty on moratorium of large-scale AI capabilities R&D:

  1. Prof John Zeleznikow – Professor, Law School, La Trobe University

  2. Dr Neville Grant Rochow KC – Associate Professor (Adj), University of Adelaide Law School, and Barrister

  3. Dr Guzyal Hill – Senior Lecturer, Charles Darwin University

  4. Jose-Miguel Bello Villarino – Research Fellow, ARC Centre of Excellence for Automated Decision-Making and Society, The University of Sydney

  5. Udomo Ali – Lawyer and researcher (AI & Law), University of Benin, and Nigerian Law School graduate

  6. Raymond Sun – Technology Lawyer (AI) at Herbert Smith Freehills and Organiser at the Data Science and AI Association of Australia

Thank you to all the judges for their involvement in this project. And thank you to Nayanika Kundu for her initial work on promoting the competition.

Please do like and share posts about the competition on LinkedIn, Facebook, Instagram, and Twitter. And please donate or become a paying campaign member to help us advertise in paid media.


🤓 New messaging research came out in the past week on alternative phrasings of "God-like AI". Previously we reported that this phrase did not resonate well, especially with older people. We revisited it and found that:

  • No semantically similar phrase improved on "godlike AI" in both agreeableness and concern simultaneously. The spelling "godlike AI" outperformed "God-like AI".

  • The following phrases, while rated less concerning, are more agreeable: "superintelligent AI species" and "AI that is smarter than us like we're smarter than 2-year-olds".

  • The following are rated more concerning, but less agreeable: "uncontrollable AI" and "killer AI".

  • Unless better phrases are proposed, "godlike AI", alongside other terms, is a decent working term, especially when addressing younger audiences.

More research is underway. Please send us your requests if you would like to know more about what the public thinks.


🟩 We have launched fresh billboards. We are focusing only on San Francisco at the moment. Thank you to Dee Kathuria for contributing the wording and the design formats. Please donate to help us reach more people in more cities.


🪧 PauseAI made headlines in Politico Pro for their recent protest in Brussels, raising concerns about the risks of artificial intelligence (AI). The passionate group gathered outside Microsoft's lobbying office, advocating for safe AI development with a placard that read: "Build AI safely or don't build AI at all."


📃 On the policy front, we have just sent our response to the UK CMA's information request on foundation models. Thank you to Miles Tidmarsh and Sue Anne Wong for their work on this submission.

There are three more that we are going to work on:

Please respond to this email if you want to contribute.


āœļø There are two parliamentary petitions under review at the parliamentary offices:


Thank you for your support! Please share this email with friends.

Nik Samoylov from Campaign for AI Safety
campaignforaisafety.org