Applications Open: Pivotal 2025 Q3 Research Fellowship

We are now accepting applications for the Pivotal 2025 Q3 Research Fellowship, a 9-week, fully funded research program in technical AI safety, AI governance and policy, technical AI governance, and AI-Bio.

Dates: June 30 – August 29, 2025
Location: London, at the London Initiative for Safe AI (LISA)
Deadline: Wednesday, April 9 (23:59 CET)

Mentor-First Model

This year, we’re introducing a mentor-first approach. Instead of selecting fellows first and matching them with mentors later, we begin by featuring experienced researchers who:

  • have research experience and deep expertise in their fields,

  • are actively working on important open questions, and

  • work in specific research directions where fellows can make meaningful contributions.

Applicants apply to one or more mentors based on their interests and skills. This structure ensures that research projects are well-scoped from the start, with clear guidance from researchers who have thought carefully about which contributions would be most valuable. If you’re open to being matched with a mentor, you can also opt to be matched by Pivotal – in that case, we also highly recommend applying to the ERA Fellowship!

About the Fellowship

The Pivotal Research Fellowship is designed for people who want to:

  • Engage deeply with the most important research questions in AI safety

  • Advance their research skills and career trajectory in an environment that encourages serious, high-quality research

  • Work alongside leading researchers in a setting that values both intellectual rigor and collaborative learning

The fellowship centres on producing high-quality research output – most commonly a paper, but also potentially a blog post series, policy report, or another form of meaningful contribution. For particularly strong projects, we’re excited to explore support for research extensions beyond the core fellowship period.

Fellows will be based in London and work in person at the London Initiative for Safe AI, with structured mentorship and opportunities to engage with the broader research community.

Fellows from recent cohorts have gone on to join organisations such as GovAI, the Institute for Progress, and the UK AISI; joined fellowships at MATS, IAPS, and GovAI; published papers at top conferences; and founded AI safety organisations such as KIRA, Catalyze Impact, and Prism Eval.

What Fellows Receive

  • Direct mentorship from established researchers: weekly meetings and asynchronous feedback in between

  • Research support from a dedicated research manager

  • A strong research environment with peers tackling similar research questions

  • £5,000 stipend + weekday meals, with additional support for travel, accommodation, and compute costs

This is our 6th research fellowship, and we continue to refine the program to ensure it gives researchers the structure, mentorship, and environment they need to do meaningful work in AI safety.

If you are interested in technical AI safety, AI governance, or AI-bio intersections and are looking for an opportunity to contribute to high-quality research in a structured, ambitious, and supportive environment, we encourage you to apply.

Apply now

Deadline: Wednesday, April 9 (23:59 CET)

If you have any questions, please send us a message.


Recommend Someone & Earn $100

Know someone who might be a great fit? Refer them here and receive $100 for each accepted candidate we contact through you.
