Unfortunately, there was an effective effort to tie AI safety advocacy organizations to their funders in a way that increased risk to any high-profile donors who supported federal policy work. I don’t know whether this affected any of your funders’ decisions, but the related media coverage (e.g., in Politico) could have been cause for concern. Small-dollar donations might help balance this.
It seems very likely that the federal government will attempt to override any state AI regulation that gets passed in the next year. Jason put together a strong, experienced team that can navigate the quickly shifting terrain in Washington. Dissolving immediately due to lack of funding would be an unfortunate outcome at a critical time.
Context: I work in government relations on related issues and met Jason at an EAG in 2024. I have not worked with CAIP or pushed for their model legislation, but I respect the team.
There is a related concern: most of the big funders either have investments in AI companies or have close ties to people who do. This biases them toward funding activities that won’t slow down AI development. So the more effective an org is at putting the brakes on AGI, the harder a time it will have getting funded.*
Props to Jaan Tallinn, who is an early investor in Anthropic yet has funded orgs that want to slow down AI, including CAIP.
*I’m not confident that this is a factor in why CAIP has struggled to get funding, but I wouldn’t be surprised if it was.
To me, it’s a questionable decision to pull funding over this type of coverage. There’s a tendency in AIS lobbying to never say what we actually mean in order to “get in the right rooms” — but then, when we want to say the thing that matters at a pivotal time, nobody listens, because we got into those rooms by staying quiet.
Buckling under pressure from the biggest lobby ever to exist (tech) putting out one or two hit pieces is really unfortunate. The same argument explains why the UK AISI and the AI Safety Summits didn’t become even bigger: there simply was no will to sustain the lobbying momentum, and everyone was too afraid for their reputation.
Happy to hear alternative perspectives, of course.