Executive summary: The London Initiative for Safe AI (LISA) aims to become a premier AI safety research hub by cultivating a supportive environment for individual researchers and small organizations to advance impactful work and novel agendas, while strengthening the broader London ecosystem.
Key points:
LISA provides financial stability, operational support, and a collaborative research environment tailored to the needs of AI safety researchers and organizations.
Over the next two years, LISA plans to mature its member organizations, shape alumni career trajectories, uncover new research agendas, and nurture future AI safety leaders.
Since opening in September 2023, LISA has made early progress: housing impactful papers, developing new agendas via MATS, and seeing residents hired by top groups.
Risks such as researcher underperformance or poor retention will be mitigated through diversification, expert evaluation, regular impact reviews, and an adaptive approach.
LISA’s theory of change focuses on empowering high-potential, epistemically diverse AI safety talent to produce influential work.
LISA attracts top researchers through an advisory board, a structure that evolves to meet researchers' needs, an appealing value proposition, support for transitions into industry roles, and positioning itself as complementary to government initiatives.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.