Executive summary: The Winter 2024/25 Catalyze AI Safety Incubation Program in London has supported the launch of 11 new AI safety organizations focused on addressing critical risks in AI alignment, governance, hardware security, long-term behavior monitoring, and control mechanisms.
Key points:
Diverse AI Safety Approaches – The cohort includes organizations tackling AI safety through technical research (e.g., Wiser Human, Luthien), governance and legal reform (e.g., More Light, AI Leadership Collective), and security mechanisms (e.g., TamperSec).
Funding and Support Needs – Many of the organizations are actively seeking additional funding, with requested amounts ranging from $50K to $1.5M to support research, development, and expansion.
Near-Term Impact Goals – Several projects aim to provide tangible safety interventions within the next year, such as empirical threat modeling, automated AI safety research tools, and insider protection for AI lab employees.
For-Profit vs. Non-Profit Models – While some organizations have structured themselves as non-profits (e.g., More Light, Anchor Research), others are pursuing hybrid or for-profit models (e.g., [Stealth], TamperSec) to scale their impact.
Technical AI Safety Innovation – Several teams are working on novel AI safety methodologies, such as biologically inspired alignment mechanisms (Aintelope), whole brain emulation for AI control (Netholabs), and long-term AI behavior evaluations (Anchor Research).
Call for Collaboration – The post invites additional funders, researchers, and industry stakeholders to engage with these organizations to accelerate AI safety efforts.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.