Update from Campaign for AI Safety
Link post
Hi!
I am starting to share weekly updates from the Campaign for AI Safety on this forum. Please sign up on the campaign website to receive these by email.
The big global news this week is that Rishi Sunak promised to look into AI existential risk. Clearly, the collective efforts of the AI safety community are bearing some fruit. Congratulations to A/Prof Krueger and the Center for AI Safety on this achievement. Now, let's go all the way to a moratorium!
We have announced the judging panel for the student competition for drafting a treaty on a moratorium on large-scale AI capabilities R&D:
Prof John Zeleznikow – Professor, Law School, La Trobe University
Dr Neville Grant Rochow KC – Associate Professor (Adj), University of Adelaide Law School, and Barrister
Dr Guzyal Hill – Senior Lecturer, Charles Darwin University
Jose-Miguel Bello Villarino – Research Fellow, ARC Centre of Excellence for Automated Decision-Making and Society, The University of Sydney
Udomo Ali – Lawyer and researcher (AI & Law), University of Benin, and Nigerian Law School graduate
Raymond Sun – Technology Lawyer (AI) at Herbert Smith Freehills and Organiser at the Data Science and AI Association of Australia
Thank you to all the judges for their involvement in this project. And thank you to Nayanika Kundu for the initial work on promoting the competition.
Please do like and share posts about the competition on LinkedIn, Facebook, Instagram, and Twitter. And please donate or become a paying campaign member to help us advertise in paid media.
New messaging research came out in the past week on alternative phrasings of "God-like AI". Previously we reported that this phrase did not resonate well, especially with older people. We revisited it and found that:
No semantically similar phrases improved on "godlike AI" simultaneously on agreeableness and concern (see the sketch of this comparison below). The spelling "godlike AI" outperformed "God-like AI".
The following phrases, while less concerning, are more agreeable: "superintelligent AI species", "AI that is smarter than us like we're smarter than 2-year-olds".
The following are more concerning, but less agreeable: "uncontrollable AI", "killer AI".
Unless better phrases are proposed, "godlike AI", alongside other terms, is a decent working term, especially when addressing younger audiences.
More research is underway. Please send requests if you want to know more about what the public thinks.
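To spell out the logic behind the first finding: an alternative phrase only "improves on" the baseline if it scores at least as well on both agreeableness and concern, i.e. if it Pareto-dominates it. The sketch below illustrates that check. All scores in it are made-up placeholders for illustration only, not our survey data.

```python
# Minimal sketch of the "no phrase improved on both dimensions" check.
# NOTE: every rating below is a hypothetical placeholder, NOT survey data.

baseline = "godlike AI"

# Hypothetical mean ratings: (agreeableness, concern)
ratings = {
    "godlike AI":                  (3.0, 3.5),
    "God-like AI":                 (2.8, 3.4),
    "superintelligent AI species": (3.4, 3.2),  # more agreeable, less concerning
    "uncontrollable AI":           (2.6, 3.9),  # more concerning, less agreeable
    "killer AI":                   (2.2, 4.1),
}

base_agree, base_concern = ratings[baseline]

def dominates(phrase: str) -> bool:
    """True if `phrase` is at least as good as the baseline on both
    dimensions and strictly better on at least one (Pareto dominance)."""
    agree, concern = ratings[phrase]
    return (agree >= base_agree and concern >= base_concern
            and (agree > base_agree or concern > base_concern))

improvements = [p for p in ratings if p != baseline and dominates(p)]
print(improvements or f"No phrase beats '{baseline}' on both dimensions.")
```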
We have launched fresh billboards. We are focusing only on San Francisco at the moment. Thank you to Dee Kathuria for contributing the wording and the design formats. Please donate to help us reach more people in more cities.
PauseAI made headlines in Politico Pro for their recent protest in Brussels, raising concerns about the risks of artificial intelligence. The passionate group gathered outside Microsoft's lobbying office, advocating for safe AI development with a placard that read: "Build AI safely or don't build AI at all."
On the policy front, we have just sent our response to the UK CMA's information request on foundation models. Thank you to Miles Tidmarsh and Sue Anne Wong for their work on this submission.
There are three more consultations that we are going to work on:
Request for comment launched by the National Telecommunications and Information Administration (USA; due 12 June 2023)
Policy paper "AI regulation: a pro-innovation approach" (UK; due 21 June 2023)
Supporting responsible AI: discussion paper (Australia; due 26 July 2023)
Please respond to this email if you want to contribute.
There are two parliamentary petitions currently under review by the respective parliamentary offices:
Greg Colbourn's petition (UK).
My petition (Australia).
Thank you for your support! Please share this email with friends.
Nik Samoylov from Campaign for AI Safety
campaignforaisafety.org