Executive summary: In this first of a three-part series, Jason Green-Lowe, Executive Director of the Center for AI Policy (CAIP), makes an urgent and detailed appeal for donations to prevent the organization from shutting down within 30 days, arguing that CAIP plays a uniquely valuable role in advocating for strong, targeted federal AI safety legislation through direct Congressional engagement, but has been unexpectedly defunded by major AI safety donors.
Key points:
- CAIP focuses on passing enforceable AI safety legislation through Congress, aiming to reduce catastrophic risks like bioweapons, intelligence explosions, and loss of human control via targeted tools such as mandatory audits, liability reform, and hardware monitoring.
- The organization has achieved notable traction despite limited resources, including over 400 Congressional meetings, media recognition, and influence on draft legislation and appropriations processes, establishing credibility and connections with senior policymakers.
- CAIP’s approach is differentiated by its 501(c)(4) status, direct legislative advocacy, grassroots network, and emphasis on enforceable safety requirements, which it argues are necessary complements to more moderate efforts and international diplomacy.
- The organization is in a funding crisis, with only $150k in reserves and no secured funding for the remainder of 2025, largely due to a sudden drop in support from traditional AI safety funders—despite no clear criticism or performance concerns being communicated.
- Green-Lowe argues that CAIP’s strategic, incremental approach is politically viable and pragmatically impactful, especially compared to proposals for AI moratoria or purely voluntary standards, which lack traction in Congress.
- He invites individual donors to step in, offering both general and project-specific funding options, while previewing upcoming posts that will explore broader issues in AI advocacy funding and movement strategy.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.