How to help crucial AI safety legislation pass with 10 minutes of effort
Posting something about a current issue that I think many people here would be interested in.
California Governor Gavin Newsom has until September 30 to decide the fate of SB 1047 - one of the most hotly debated AI bills in the world. The Center for AI Safety Action Fund, where I work, is a co-sponsor of the bill. We need your help to encourage the Governor to sign it! You can help by writing a quick custom letter and sending it to his office (see instructions below).
About SB 1047 and why it is important
SB 1047 is an AI bill in the state of California. SB 1047 would require the developers of the largest AI models, those costing over $100 million to train, to test the models for the potential to cause or enable severe harm, such as cyberattacks on critical infrastructure or the creation of biological weapons resulting in mass casualties or $500 million in damages. AI developers must have a safety and security protocol that details how they will take reasonable care to prevent these harms, and they must publish a copy of that protocol. Companies that fail to perform their duty under the act are liable for resulting harm. SB 1047 also lays the groundwork for a public cloud computing resource to make AI research more accessible to academic researchers and startups, and it establishes whistleblower protections for employees at large AI companies.
I believe SB 1047 is the most significant piece of AI safety legislation in the country, and perhaps the world. While AI policy has made great strides in the last couple of years, AI policies have mostly not had teeth – they have relied on government reporting requirements and purely voluntary promises from AI developers to behave responsibly. SB 1047 would actually prohibit behavior that exposes the public to serious and unreasonable risks, and incentivize AI developers to consider the public interest when developing and releasing powerful models.
If SB 1047 is vetoed, it’s plausible that no comparable legal protection will exist for the next couple of years, as Congress does not appear likely to pass anything like this any time soon.
The bill’s text can be found here. A summary of the bill can be found here. Longer summaries can be found here and here, and a debate between a bill proponent and opponent is here. SB 1047 is supported by many academic researchers (including Turing Award winners Yoshua Bengio and Geoffrey Hinton), by employees at major AI companies, and by organizations like Imbue and Notion. It is opposed by OpenAI, Google, Meta, and the venture capital firm A16z, as well as by some other academic researchers and organizations. After a recent round of amendments, Anthropic said “we believe its benefits likely outweigh its costs.”
SB 1047 recently passed the California legislature, and Governor Gavin Newsom has until September 30th to sign or veto it. Newsom has not yet said whether he will sign it, but he is being lobbied hard to veto it. A veto would set back AI safety legislation significantly and expose the public to greater risk. He needs to hear from you!
How you can help
There are several ways to help, many of which are detailed on the SB 1047 website.
The most useful thing you can do is write a custom letter. To do this:
Make a letter addressed to Governor Newsom using the template here.
Save the document as a PDF and email it to leg.unit@gov.ca.gov.
In writing this letter, we encourage you to keep it simple, short (0.5–2 pages), and intuitive. Complex, philosophical, or highly technical points are not necessary or useful in this context – instead, focus on why the risks are serious and how this bill would help keep the public safe.
Once you’ve written your own custom letter, think of 5 family members or friends who might also be willing to write one. Supporters from California are especially helpful, as are parents and people who don’t typically engage on tech issues. Then help them write it! You can:
Call or text them, tell them about the bill, and ask if they’d be willing to support it.
Draft a custom letter based on what you know about them and what they told you.
Send them a completed letter as a PDF and ask if they’re willing to send it to the Governor’s office using the instructions above.
Organize an event! To make even more of an impact, consider hosting an event where people gather to write letters. Please email thomas@safe.ai if you are interested in hosting one.
Thank you in advance for any help!