Announcing the Cambridge ERA:AI Fellowship 2024

The Cambridge ERA:AI Fellowship is excited to announce that applications for our eight-week, paid summer research internship in Cambridge, UK, are now open.

This year, ERA (formerly Existential Risks Alliance) will focus on AI safety and governance research, working in collaboration with several research centres at the University of Cambridge, including the Centre for the Study of Existential Risk (CSER), the Leverhulme Centre for the Future of Intelligence (CFI), and the Krueger AI Safety Lab. Fellows in this programme will research essential aspects of AI safety, including the technical foundations, design principles, and governance frameworks needed to ensure that increasingly capable AI systems are safe, secure, and reflect human values.

We invite early-career researchers from around the globe, including undergraduate students, to join us from July 1 to August 23 in Cambridge, UK. This is an exceptional chance to help steer the rapid progress of transformative AI through safety research and responsible governance.

During the fellowship, participants will receive:

  • Full funding: Fellows receive a salary equivalent to £34,125 per year, prorated to the duration of the Fellowship. On top of this, our fellows receive complimentary accommodation, meals during working hours, visa support, and travel expense coverage.

  • Expert mentorship: Fellows will work closely with a mentor on their research agenda for the summer. See our Mentors page to learn about previous mentors.

  • Research support: Many of our alumni have gone on to publish their research in top journals and conferences, and we provide dedicated research management support to help you become a strong researcher or policymaker in the field.

  • Community: Fellows are immersed in a living-learning environment: they have dedicated desk space at our office in central Cambridge and are housed together at Emmanuel College, Cambridge.

  • Networking and learning opportunities: We assist fellows in developing the necessary skills, expertise, and networks to thrive in an AI safety or policy career. We offer introductions to pertinent professionals and organisations, including in Oxford and London. In special cases, we also provide extra financial assistance to support impactful career transitions.

Our Research

The rapid advancement of artificial intelligence in recent years has brought about transformative changes across many domains. As AI systems become more sophisticated and autonomous, their potential impact on society grows. With this increased capability comes a heightened responsibility to ensure that these systems are developed and deployed in a safe, secure, and reliable manner.

As part of the Cambridge ERA:AI Fellowship, fellows will spend eight weeks working on a research project related to AI safety. Based on four categories of possible risk (malicious use, AI race, organisational risk, and rogue AIs), we have outlined some ways to address these risks and avenues for further research. This list is far from exhaustive; instead, we hope it serves as inspiration and guidance for the types of projects we expect to see over the summer.[1]

Who Can Apply?

Anyone! We are looking to support fellows from a wide range of subject areas who are committed to reducing risks posed by advances in AI.

However, we expect the Cambridge ERA:AI Fellowship might be most useful to students (from undergraduates to postgraduates) and to people early in their careers who are looking for opportunities to conduct short research projects on topics related to AI safety and governance. Note that we are currently unable to accept applicants who will be under the age of 18 on July 1, 2024.

The Application Process

We review applications on a rolling basis and encourage candidates to apply early, as we will extend offers as soon as we identify suitable candidates. Please note that the application deadline is April 5, 2024, at 23:59 US Eastern Daylight Time.

The first stage consists of essay-style questions. Applicants who progress to the next stage will be invited to interview. Successful applicants will be notified by May, and afterwards, we will work with accepted fellows to develop their project ideas and pair them with relevant mentors.

If you know someone who would excel in this opportunity, we strongly encourage you to recommend that they apply. A personal recommendation can significantly increase the likelihood that someone applies, even if they are already aware of the opportunity. Additionally, if you lead or are involved in relevant community spaces, please consider sharing an announcement about the fellowship, including a link to our site.

To apply and learn more, please visit the ERA website.

If you have any questions that are not covered on our website or in our FAQs, please email us at hello@erafellowship.org.

* Please note that in previous years, ERA’s research focus was broadly on existential risks, including biosecurity, climate change, nuclear warfare, AI safety, and meta topics. This year, we are focusing on AI safety to direct our resources and attention toward an increasingly capable emerging technology. If you are interested in doing research on X-risks/GCRs outside of AI safety and governance, you may consider applying for UChicago’s Summer Fellowship on existential risk.

  1. The four categories mentioned here are from the Center for AI Safety’s report An Overview of Catastrophic AI Risks (2023).